Nesting

08 Nov 2020 - Vidar Hokstad

Professor Greene spread his wings while he brushed six fingers over his facial hair. I did not dare disturb him while he was thinking, and waited anxiously for his feedback.

"You're right, this does pose a problem. The simulation will grind to a halt if they keep this up."

We looked at the footage of the simulated people in their simulated lab as they were coming to terms with the simulated complexities of building a simulated simulation just like the simulation they were in.

The problem was obvious to both of us. All our careful optimization worked through layers on layers of "dirty hacks". We cared about observing and learning from their behaviour, and so we did not need every little physical detail of the simulation to be precise.

The simulation carefully tracked what they observed, and discarded all state we could tell they could not keep track of. This again allowed us to massively prune what we simulated accurately vs. what we replaced with crude approximations.

Weather, for example, is so inherently complex that we could generate randomly perturbed patterns from crude, cheap-to-compute models that matched their expectations of complexity. We only needed to fill in the detail in narrow zones around simulated weather stations, and to a lesser extent around simulated people.

"As far as I can tell their simulation logs enough data that there's no obvious way for us to just synthesize predictions. If something looks out of place, they'll try to trace it, and they'll spot discrepancies."

Professor Greene had started pacing.

"It'll ruin all our work! It's already a massive resource drain to simulate their damn computers, but this is on another scale entirely. It'll slow the simulation down orders of magnitude."

We simulated their computers as much as possible by "lifting". We translated their programs into code that could run on our computers.

Heavily sandboxed and firewalled, of course; we didn't want to risk an "escape" - anything that would let their code detect details of something outside their simulated reality.

We'd "tap" the inputs of their computers, and send it to the translated programs running on ours, and feed the outputs back. To them it looked like their computers did the work, but we just simulated the power drain and heat they'd cause, and the people running them were none the wiser.

Their own computers didn't actually do anything unless they hooked up diagnostics equipment. In those rare cases we'd let them work, while they observed. The system automatically detected this. It was quite a clever bit of code.

But simulating the kind of large-scale simulation they were setting up... that was another thing entirely.

"You don't think they suspect?" Greene asked.

"Well, they have come up with a quite reasonable understanding of simulation," I told him. "But it's still seen as fringe. We've checked their communications. They're not doing this to test if they're themselves in a simulation in any way. They're motivations are pretty much the same as ours at this point. They want to understand consciousness, just like us."

"Whatever their motivations, we need to figure out a workaround."

Greene put on his glasses.

"We'll continue tomorrow. It's late."

He went outside, and through the window I could see him fly off towards his nest.

I wasn't ready to give up yet, and spent the evening probing and prodding our simulation to see what I could come up with.

I don't know what time I fell asleep. Only that it was bright outside by the time professor Greene tapped me on the shoulder and woke me up.

"I think I have a solution," was the first thing I said to him.

It was not clear when the idea had come to me. It might have been the previous evening as I was going through our code. It might have come to me in a dream.

But it was a logical extension to our growing pile of hacks.

"Do tell," he said, with a smile.

"The problem is they're logging extensive data about everything, right? Full traces?"

"Yes, that's the biggest problem. The complexity of faking the data is one thing, but cross-correlating it and fixing all their logs and all the dependent data, and even their memories if any of them remember details would be almost as complex as allowing them to run their simulation itself."

"But we do the same thing! We have extensive logs as well. And since the simulated world is based on ours, everything is a relatively close match. After all, we want to understand our world."

"Are you suggesting?"

He paused.

"You're suggesting we feed them our logs?"

"Exactly! We just perturb them a bit to prevent it getting too recognizable."

He thought it over and gave it his approval.

It was an elegant solution, I thought. We'd train one of our detectors to recognise attempts at running a large-scale universe simulation. It would alert us if they tried running anything unusual, but assuming the simulation matched the parameters of our own simulation, it'd add a patch - bypassing the slow, low-level computer simulation, and even the faster dynamic translator, and instead start feeding them traces from our own recordings of our own simulation of them.

They'd look at their data, and think it was their simulation, but instead they were looking at their real lives.

There were many details to figure out, but we'd caught them early - they still had lots of coding to do. And of course we could arbitrarily slow down the simulation if necessary.

We were apprehensive when they first switched their simulation on, but also certain that they'd write off any discrepancies in the results as bugs, and give us a chance to address them.

We were right.

"Congratulations," professor Greene said as we were watching them in their simulated lab, watching the output from our/their simulation, and celebrating our/their success. The scene was so familiar. Just how we had celebrated when we first got everything running.

He held out his hand, and I grabbed it. He rarely gave praise, so when he did, it meant a lot.

I watched them long after he had flown home, before I too flew to my nest.

We observed their simulation exercise less and less often over the coming weeks and months. The initial urgency was over, and we had a big world to monitor, and research papers to write about what we were learning.

One morning the alarm went.

The one we had set to trigger if something abnormal was happening with their simulation project.

When professor Greene joined me, I had determined the cause already.

"Our latest speedups... They mean the simulation will be a bottleneck."

We'd had a whole team spend months on new hacks to accelerate the next version of the simulator, so it'd now run at about twice "real time" on average - a standard 26-hour day would be simulated in a little under 13 hours.

Some ingenious engineering coupled with a grant for more computing power meant we hoped to eventually get it down to less than 3 hours per day.

But in our excitement we'd forgotten their simulation.

"They're rapidly catching up to our logs. When we run out, we have to slow everything down or let their simulation actually run whenever it overtakes. Which will also slow everything down."

"We can't do that. We'd have all kinds of questions about why we spent that time and money only to be unable to speed things up. It defeats the entire purpose of the new grants."

Professor Greene rarely shouted like this.

He paced for a bit. Then he brought up some notes and looked them over.

"But we might have a solution. The new Eraser module."

"You mean? But it'll waste a lot of effort too."

"You're right, but their simulation project isn't important to us. We'll need to verify, but I think the impact on their society will be minor enough to be acceptable."

We agreed to meet again a couple of days later, when we'd had a chance to work through the implications.

"It all checks out," I told him. "We can delete the lab and the lead scientists with relatively small impact."

The Erasure module was a new, last-resort cleanup tool. It'd trace memories and observations of someone by other people in the simulation and "smudge" them into generic memories of non-specific people, making the simulated person we deleted disappear not just from the simulation, but from the memories of the other simulacra.

They'd have just a vague recollection of someone they'd be unable to place.

We'd erase the whole lab, wipe their simulation, and hope it'd be a long time before someone else would come up with the idea. It'd save us a whole lot of trouble. But they'd be gone.

I hadn't paid attention to what they were up to in there for a while, but it felt a bit weird. After all, I'd spent a lot of time watching these simulated people and their lives. I told myself it was okay. After all, they were not real.

"Let's do it," professor Greene said.

I typed a command and watched the ripple through the simulation as the lab disappeared and with it professor Browne and his assistant, right in the middle of dealing with the same problem we had been. Reduced to some data in our logs.

---

"Let's do it," professor Reede said.

I typed a command and watched the ripple through the simulation as the lab disappeared and with it professor Greene and his assistant, right in the middle of dealing with the same problem we had been. Reduced to some data in our logs.

---

"Let's do it," professor Jaune said.

...
