We used to call them beefburgers, on the very sensible basis that they are made of beef. As we have fought against transatlantic currents, this has been shortened to simply burgers. Burger is short, to the point and, crucially, doesn't need to bother you with its content. This is fine, and I have no problem with this linguistic shift. What I do have a problem with is the shift in the proportions of said burger.
Burgers are essentially sandwiches. This makes even more sense when considering sandwiches outside of the relatively spartan prepacked meal deals of the UK. Burgers are substantial sandwiches, but fundamentally they are still meant to be eaten with your hands. Now, I know this isn't a particularly original observation, but at some point burgers went from a squat shape to a tall one that no longer fits in my mouth. This is something that perennially irks me. Why does it need to be so big? The ratio of height to diameter of some is now so large that it's almost a better eating strategy to skewer the burger through the centre of the buns and gnaw at it like corn on the cob. Inevitably I resort to pressing it down in an attempt to flatten it, removing slippery tomatoes or slicing it into more manageable chunks. Ultimately, however, I end up with grease everywhere and cheese stuck to my chin. There is little dignity in the process.
For some of you, this may not seem like a big deal. In fact, some people may even enjoy the messiness of the process. If this is you, then I have good news - very soon your entire digital experience will be one gigantic, misshapen calamity that you will be incapable of handling without getting mess everywhere.
I've complained about AI and enshittification before. This is definitely part of the problem, but it's dangerously lazy to think that the worst will be done by AI being sloppy. The absolute worst will be done by people using it well. These people will be the ones who want to make money from you, and they will come at you from both ends of the law. At one end, which currently drifts in a haze of legal possibilities, is privacy at scale. The zettabytes of data that now link us all in a global network are ripe for exploitation. The boundary of that exploitation is constantly shifting, but as more wealth and power transfer to the tech giants, its direction of travel is a weakening of data protection in its most absolute sense. Exceptions are made, and with them our sense of what is normal shifts. When Facebook first introduced facial recognition there was a backlash of concern over privacy. Since then, however, Google Glass has been and gone with less fuss, police forces in western democracies routinely use facial recognition, and Ring have announced it as a new feature for their doorbell cameras. The latter is significant. The ubiquitous doorbells have enrolled large swathes of society into surveillance culture, to the point at which facial recognition has become an acceptable convenience.
So who cares if they're being recorded, and maybe even recognised, as they walk past someone's house? Aren't we all being recorded everywhere already? Absolutely, yes, we are. In most urban environments the major modes of video surveillance are traffic cameras, street-facing security cameras and internal security cameras in businesses and homes. Some of these are more secure than others, either by accident or design. It is theoretically possible to track your movements all day long using these. What generally stops this being a real concern, unless you have _really_ pissed off the authorities, is that all these systems are separate. It would take a team of skilled people to track you across such a network. Government agencies, with access to more private data such as interactions with contactless payment devices and so on, stand a better chance of success. So why worry if you haven't done anything the government would be interested in?
Well, step in our old chum AI. The bottleneck that has been preventing the real-time surveillance of individuals is that processing the colossal amount of data available at any one time takes longer than real time. Chomping through this sort of data, however, is something that AI is very good at. It can easily (for some values of easily) compile details of a person's activities by collating video data matched by facial recognition with non-video data matched by location.
Again, you may think this is nothing to worry about. Why would people with such technology be interested in you?
The problem at this point is that it will no longer be just the authorities that can do this. Other actors with different ambitions will seize the opportunity. We already see this in action through older technology. Cold callers talk pensioners into investing in a scam. Phishers trick you into clicking on the wrong link. Some of the latter can be hard to spot if you're not paying attention, especially on a phone, which is where most of our online interactions happen right now. Can you trust that the person who sent you the link hasn't been hacked or spoofed? You're internet savvy, you can figure this out. But then, while you're trying to work out whether to click on said link, you get a FaceTime call from your sister. Her car has broken down again and she can't pay for the repair. Your phone doesn't recognise the number, but that is barely noticeable. You recognise your own sister. So does your phone - it helpfully tags her onscreen. Besides, the last few Facebook posts you'd seen from her had been griping about how her car was on its last legs. And you can afford to lend her the money after that #cheltenham win. Just click the convenient link she's provided and give it no further thought.
It's an audacious scam, but one which will be automated at some point in the near future - and yet it will mostly operate in the past. This isn't a time-travel conundrum. It's the other part of the AI surveillance puzzle, which is our collective internet history.
Interacting with the internet is rarely completely anonymous, especially if you spend any time in some form of public space, be that a publicly available website or a social media platform. You have a history, and unlike your browser history, flushing it is not a one-click operation. It's all out there for anyone to find. Again, until recently this was not necessarily a problem unless you attracted a determined stalker. But here comes AI again, with its tireless ability to search and collate information. It can work through social networks and the wider internet, discovering trusted connections between people, be they family or friends. Furthermore, it can use their conversations to work out which connections are most easily exploited - it can see your sister complaining about her car. It can see you boasting about your winning bet on the horses. That is the tricky part, and it will be played like traditional email spam: an overwhelming numbers game where the scammers just need a few bites to make it worthwhile. The easy part is the part that used to be hard. With a trail of photos and videos behind you both, it will be trivial for the same AI agent that targeted you to create a live simulation of your sister. It will look and sound like her. It will know her history from her online presence and use that to engage with you, keeping the conversation within the parameters it has been given until you transfer the money. It can even use the previously mentioned public and unsecured cameras with facial recognition to track you both, so it can work out the most opportune moment to initiate the scam.
I don't know the answer to this. I have a nagging urge to erase my internet presence completely, but (a) the internet is kind of home at this point and (b) I'm not convinced it's possible. So maybe I can solve the burger problem instead. Is it fundamentally about keeping a low profile? I'm not sure, but in the interests of fending off attention from future malignant AI agents, I find burgers all too expensive for what they are.