A more advanced browser: mostly for pages that mess with JS so heavily that it becomes indistinguishable from late-90s ActiveX. Mostly WebAssembly, desktop/device sharing, GPU access.
The VM is configured in a way that makes it look almost like a weird search bot (no, not the Google or Bing kind; the kind that looks for e-mail addresses to send cryptocurrency trading offers). It may introduce itself with a user agent containing strings that are forbidden in the USA and some EU countries, lots of things are forward-buffered, and I usually keep the cache for some time. This is made purely for those greedy sites which use such technologies to harvest data. Forward buffering is forced on because I may need the information I browse later, and I can then read it offline, without messing with dangerous sites on the Internet.
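To illustrate the idea (this is not my actual setup; the user agent string, cache directory, and link-extraction regex below are made up), a forward-buffering fetcher boils down to: fetch a page, store it, then eagerly fetch everything it links to. A minimal sketch, assuming Node 18+ with its built-in fetch:

```ts
// Minimal forward-buffering sketch, assuming Node 18+ (global fetch).
// CACHE_DIR and SPOOFED_UA are illustrative, not a real configuration.
import { createHash } from "node:crypto";
import { mkdir, writeFile } from "node:fs/promises";
import { join } from "node:path";

const CACHE_DIR = "./cache";          // hypothetical cache location
const SPOOFED_UA = "ExampleBot/1.0";  // stand-in for the spoofed user agent

async function bufferPage(url: string): Promise<string> {
  const res = await fetch(url, { headers: { "User-Agent": SPOOFED_UA } });
  const body = await res.text();
  // Key the cache entry by a hash of the URL so odd characters don't matter.
  const key = createHash("sha256").update(url).digest("hex");
  await mkdir(CACHE_DIR, { recursive: true });
  await writeFile(join(CACHE_DIR, key + ".html"), body);
  return body;
}

// Forward buffering: store the page, then eagerly fetch everything it links
// to, so the material can be read offline later.
async function forwardBuffer(url: string): Promise<void> {
  const body = await bufferPage(url);
  const links = [...body.matchAll(/href="(https?:\/\/[^"]+)"/g)].map(m => m[1]);
  await Promise.allSettled(links.map(bufferPage));
}

forwardBuffer("https://example.com/").catch(console.error);
```

A real setup would deduplicate URLs and respect a size budget; the point is only that the client, not the server, decides what gets buffered.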
I work in science and I am totally open about the fact that I use those "Pirate Bay for scientists" portals which allow bypassing paywalls, as modern publishers are parasites. So, once we had a small course about searching for papers, and we got access to some paper database. Whoever had a portable PC had to log in there and find something as an exercise. I asked them a few times whether they REALLY wanted this done on my private PC, but yes. So here we go!
There was a Java-based downloader used to download PDFs chosen from their WWW site. The message passing between the WWW page and the Java side was done through a string taken from the document DOM, which was scanned in its entirety. Of course, running totally untrusted Java code is not something I would do in my OS, so in goes the VM.
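The portal's code was not public, so this is only a guess at the shape of that handoff: the page's JS walks the whole DOM, collects paper identifiers, and concatenates them into one string for the downloader. The data-paper-id attribute and launchDownloader function are hypothetical names:

```ts
// A guess at the WWW-to-Java handoff: scan the entire document for paper
// entries and join their IDs into the single string the downloader receives.
// "data-paper-id" and launchDownloader are hypothetical names.
function collectDownloadString(root: Document): string {
  const ids: string[] = [];
  // Scan the whole document, including any sub-divs, because everything
  // injected into the page is part of the same DOM tree.
  root.querySelectorAll("[data-paper-id]").forEach(el => {
    ids.push((el as HTMLElement).dataset.paperId ?? "");
  });
  return ids.filter(Boolean).join(";");
}

// Stand-in for the Java bridge; in the real portal this string would be
// handed to the Java downloader.
function launchDownloader(payload: string): void {
  console.log("would pass to Java downloader:", payload);
}

launchDownloader(collectDownloadString(document));
```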
I found that, programmatically, every "related paper" can be opened not in a new page but in a sub-div; that is, they become part of the same DOM, and the JS concatenates them into the string passed to the Java downloader. Because "related papers" are also interesting, I experimentally raised my forward buffering to 250 GB and recursively opened the "related papers" tree to the 5th order.
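Again as a rough sketch only (the .show-related toggle, the .related-paper selector, and the fixed wait are all guesses at how such a page might behave), the recursive expansion amounts to: open each entry's related list in place, then recurse into the newly injected divs up to a fixed depth:

```ts
// Open one entry's related papers in place and return the injected children.
// Selector names and the fixed wait are illustrative guesses.
async function expandRelated(el: HTMLElement): Promise<HTMLElement[]> {
  el.querySelector<HTMLElement>(".show-related")?.click(); // hypothetical toggle
  await new Promise(r => setTimeout(r, 1000));             // wait for injection
  return [...el.querySelectorAll<HTMLElement>(":scope .related-paper")];
}

// Open the related-papers tree to a given depth. Every expanded entry stays
// in the DOM, so the string handed to the downloader grows to cover the
// whole tree.
async function expandTree(el: HTMLElement, depth: number): Promise<void> {
  if (depth === 0) return;
  for (const child of await expandRelated(el)) {
    await expandTree(child, depth - 1);
  }
}

// Depth 5, as in the experiment described above.
expandTree(document.body, 5).catch(console.error);
```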
This turned out to be a really useful feature for a scientist. Imagine downloading, with one click, all related and cited papers, so it is possible not only to read a paper, but also the cited publications, to see where the information was taken from!
Well, the funny thing is that after I had filled four or five caches of data, a few weeks later the supervisor got a letter claiming there was "automated querying". I then showed that the "automation" was done on the publisher's side, and that framing people for violating the ToS is terribly inappropriate for an organization that pretends to be an engineers' association.
In the end they must have had the problem on a wider scale, as they introduced Google's CAPTCHA.
And a useful hint for web developers: it does not matter that you ask the browser to disable forward buffering. In the end it is always the browser's decision, so don't try to control the user even more.
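That is easy to see from the client's side: cache directives are advisory. A minimal sketch of a client that caches regardless of what the server asks (the in-memory Map stands in for the persistent cache described above):

```ts
// Sketch of why cache hints are advisory: a client is free to store a
// response even when the server sends Cache-Control: no-store. The Map is a
// stand-in for a persistent forward-buffer cache.
const cache = new Map<string, string>();

async function fetchIgnoringCacheHints(url: string): Promise<string> {
  const hit = cache.get(url);
  if (hit !== undefined) return hit;
  const res = await fetch(url);
  // The server may ask the client not to store this; nothing enforces it.
  console.log("server says:", res.headers.get("cache-control"));
  const body = await res.text();
  cache.set(url, body);
  return body;
}
```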