#Fiction/Sci-Fi #2025/4
*April 19, 2025*
![[neuromancer-cover.png]]
I recently became interested in the "cyberpunk" genre of science fiction. These stories are set in dystopian near-future worlds and take an approach roughly similar to spy stories. They make for interesting thought experiments about what humans may become in worlds of extreme technological advancement.
Neuromancer is often cited as the foundational book of the genre, so naturally I chose it as my starting point. The writing is phenomenal; William Gibson throws you into the deep end of the world he's constructed and forces you to proceed without a clear understanding of what's happening. Questions are often answered only in retrospect, and events move so quickly that you can't track what has happened until the dust settles. This is perfect for a book like Neuromancer, where the main character has been drawn into the schemes of an advanced AI.
I often wonder what it will be like once we have AIs that can devise superhuman schemes and execute them in the real world. A lot of sci-fi has been written about superintelligence scenarios in which an AI becomes effectively omniscient and unstoppable, but I don't think that vision of the future is as realistic as what we find in Neuromancer. In this story, AIs can certainly be superintelligent, but they are limited by how effectively they can engage with the physical world. That seems much more plausible to me than a "Metamorphosis of Prime Intellect" type of future where a single AI can manipulate physical reality itself.
When we do have superintelligent AIs that need human help to enact their schemes in the real world, it could feel something like the story of Neuromancer: the characters are whisked at breakneck speed from place to place, doing this and that for no apparent reason at first.
What impressed me most about the book is how well the author understood what it would be like for a human to communicate with an AI. I imagine there were a few decades when people thought he was totally wrong, but today, his depiction of how Wintermute must interact with humans through personality masks is spot on. The LLM approach to AI is very similar: these things learn many personalities, many faces, many domains, but they are singular intelligences. When they're packaged up in something like ChatGPT, a great deal of effort goes into making sure they present one coherent persona to the end user, out of the huge space of personas they *could* present. Talking to a base model is a somewhat schizophrenic experience, as the model makes associations between things we would never have the perspective to see. For an AI to communicate with us, it has to narrow down the possibilities and speak at our level. I find it frustrating to be unable to experience the thought process of an advanced AI.
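To make that last point concrete, here's a minimal sketch of the "many masks, one intelligence" idea, assuming the Hugging Face `transformers` library and the publicly available `gpt2` base model (my choices for illustration, not anything from the book). A raw base model has no fixed persona; the same weights will continue each prompt in whatever voice the prefix suggests:

```python
# Sketch: one base model, many personality masks.
# Assumes `pip install transformers torch` and the public "gpt2" checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Three prefixes implying three very different speakers.
personas = [
    "Dear diary, today I",
    "ERROR LOG 2077-01-01:",
    "The bartender leaned in and whispered,",
]

for prefix in personas:
    inputs = tokenizer(prefix, return_tensors="pt")
    # Sampling (rather than greedy decoding) exposes the breadth of
    # voices the model has learned; nothing about the weights changes
    # between personas.
    output = model.generate(
        **inputs,
        max_new_tokens=30,
        do_sample=True,
        temperature=0.9,
        pad_token_id=tokenizer.eos_token_id,  # gpt2 has no pad token
    )
    print(tokenizer.decode(output[0], skip_special_tokens=True))
    print("---")
```

A chat product collapses this distribution of voices into one stable persona through fine-tuning and system prompts. The underlying model hasn't lost the other masks; it has just been pointed at one of them, which is more or less how Wintermute chooses a face to wear for each human it needs.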