Agentic browsing – it's getting serious

When AI now acts, the allowed error margin suddenly becomes much smaller. Inhererent shortcomings of LLMs and their vulnerability against manipulations and attacks, e.g., prompt injections, are now a real problem.

For years now – almost as long as the generative AI boom itself has been going on – there have been warnings about attacks on AI systems. The most prominent of these are prompt injections, but there are also other forms of manipulation. At the same time, we have developed a more nuanced view of the shortcomings of LLMs: we now know that they hallucinate, have no real grounding and do not understand what a company's goals are. All of this has made it difficult for companies to launch AI applications that are truly productive. To date, there are hardly any enterprise applications that fully automate processes or interact directly with customers.

For users, this was harmless until now

Most of us have so far found these weaknesses in LLMs to be more annoying or amusing than anything else. No wonder:

  • When we ask for suggestions, mistakes are no big deal – we simply choose the best one and make a few corrections.
  • When we use AI tools privately, i.e. only we and the LLM provider are involved, there is little scope for attack and little potential for hackers.
  • And as long as we remain in control, i.e. the chatbot never acts on our behalf, manipulation is not a major issue.

All of this has lulled us into a false sense of security. We think, ‘It can't be that bad. These prompt injections are more of a theoretical problem.’

Wrong. The reason why things have gone smoothly so far is simple: AI has always been our conversation partner, never our representative. And we have never really let AI act. We have always been the final authority – we have corrected, checked and secured.

This is changing now.

With Agentic Browsing it's getting serious.

As soon as AI systems start acting independently, the risks change dramatically. If an AI browser suddenly orders 100 power drills because it has no practical knowledge and does not understand that this is an absurd order quantity for a private individual, then it is no longer a funny anecdote that we post on X – suddenly the parcels are sitting on our doorstep. Here, prompt injections are no longer academic gimmicks. This is about our credit card information, accounts, email contacts – and how quickly they can end up in the hands of hackers.

What that means

There will be incidents geben, many of them.

The gigantic social experiment we are all currently participating in is entering its next phase. A phase that brings the real challenges to the table. One that is not about AGI, but perhaps about whether we give our AI browser £10 or £20 pocket money that it can spend without asking us.

© 2026 LINK2AI GmbH. All right reserved.