Q&A with Mark Burgess – Smart Spacetime

March 10, 2019

Q: William Louth is a renowned software product designer and developer and an experienced systems engineer with particular expertise in self-adaptive software runtimes, adaptive control, cybernetics, resilience engineering, software simulation & mirroring, application performance monitoring and management, as well as execution cost optimization and scalability engineering.

A: Mark Burgess is a theoretician and practitioner in the area of information systems, whose work has focused largely on distributed information infrastructure. He is known particularly for his work on Configuration Management and Promise Theory. He was the principal founder of CFEngine and ChiTek-i, and is now co-founder and chief innovation officer at Aljabr Inc. He is emeritus professor of Network and System Administration at Oslo University College. He is the author of numerous books, articles, and papers on topics ranging from physics and Network and System Administration to fiction. He also writes a blog on issues in science and the IT industry. Today, he works as an advisor on science and technology matters all over the world.

 

How would you compare this new book, Smart Spacetime, with your previous ones in terms of domain and scope? Is this a natural evolution of the thinking behind Promise Theory or something entirely new and different?

That’s a great question. It began as a continuation of my work on Promise Theory, about five years ago, when I decided to explore what it might mean to attach smart behaviors to every possible location around us — then, in the context of the Internet of Things. But the more I worked on it, the more I realized there were deep ideas that needed to be explained to a broader audience. As a physicist, I grew up on spacetime, but that’s not everyone’s diet!

 

Can you give a short description of spacetime and why it is, or needs to be, Smart? Why is it useful to acquire a greater understanding of spacetime?

The term spacetime was coined around the time of Einstein’s work on relativity. I suppose the short answer is that he showed that our ideas of space and of time couldn’t be separated, because the world is made up of processes, not empty slots and a kind of mystical wind pushing to the future. So the term spacetime means process. As for why it needs to be smart, well that’s an attention grabber, but the long-term consequence of describing processes at all different scales, from simple to complex, is that we can understand “smart” or “intelligent” behaviors too. That’s obviously of interest in connection with AI, but I believe it’s all part of the same story.

What I realized from my work on formalizing this through Promise Theory was that the fundamental processes of spacetime are at the root of many different phenomena, from basic physics to biology and IT. The papers I wrote (even though they were sketchy overviews) lost a lot of people by being too technical. This is my attempt to write a more accessible book.

 

What do you mean by process? How do we distinguish it from a series of events? Is it an entity or object of concern or something that exerts an external or internal influence on things we have identified within an environment?

Events are “passive” observations of change made by a process. Processes are sequences of change generated from within some agent, at some scale. That includes the case where you have a collection of agents acting as a single superagent. I use air “quotes” because observation is also a process that comes from within. I think your question is about knowing the relationship between observed events or changes and the process that caused them. There are straightforward mathematical ways to define that, though it’s more complicated than one might think.

If an observer sees a stream of events, he, she or it might infer the existence of a process that was their origin. They might be right, or they might be mistaken — the events may originate from different processes and only look like they belong together. This is a problem of relativity.
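As a rough illustration in code of that distinction (a sketch with assumed names, not anything from the book): a process is change generated inside an agent, while events are the samples an outside observer collects, possibly interleaved from several processes that the observer may wrongly lump together.

    # Illustrative sketch: Agent, Observer and tick() are assumptions, not from the book.
    import random

    class Agent:
        """Generates change from within; its own history is the process."""
        def __init__(self, name):
            self.name = name
            self.state = 0

        def tick(self):
            self.state += 1                      # internal change
            return (self.name, self.state)       # an observable event

    class Observer:
        """Sees only a merged stream of events, not the processes behind them."""
        def __init__(self):
            self.events = []

        def sample(self, event):
            self.events.append(event)

    a, b = Agent("A"), Agent("B")
    o = Observer()
    for _ in range(6):
        o.sample(random.choice([a, b]).tick())   # events from interleaved sources

    # From this stream alone, the observer might infer a single originating
    # process where there were in fact two.
    print(o.events)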

 

Where does a process exist? Does it have some containment of sorts? A process seems distinct from an environment or location? Are these concepts even applicable and useful anymore?

That’s a huge philosophical question. Going back in history, our ideas have been handed to us by Euclid and later Newton. Their concept of space was a kind of theatre in which processes took place—we call it the universe. The genius of Euclidean space is that it assigns a precise and unambiguous meaning to what it means for one location to be next to another location. But, in the dataverse (what I call the world of computers) for instance, things look a bit different. There are different levels of where things are and different definitions of when two things are next to one another (adjacent). You have a level of networking based on the wires. You have another level based on communication through channels, in the Shannon information sense. Then there is virtualization, VPNs, etc. It all boils down to communication. So you could argue that space and location don’t even have meaning without a process that communicates between locations. That changes the way you have to think about space and time.

When we talk about an environment, we think of an enveloping region locally next to us. But that whole idea is rooted in Euclidean thinking. In a more general sense, there is only context for processes. Context can be physical or virtual, as you define them, but it can also be semantic. Being next to something else may depend on what kind of interaction you are having. I can be next to you without you being next to me! That’s just not allowed in Euclid or Newton.
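One way to make that last remark concrete in code (a hedged sketch; the channel names and structure are illustrative assumptions): if “next to” is defined by a communication channel rather than by Euclidean distance, adjacency becomes a directed, per-interaction relation, so it need not be symmetric.

    # adjacency[channel] maps an agent to the set of agents it can reach
    adjacency = {
        "wire":    {"A": {"B"}, "B": {"A"}},   # symmetric, like a physical cable
        "publish": {"A": {"B", "C"}},          # A can reach B and C, but not vice versa
    }

    def is_next_to(src, dst, channel):
        """src is 'next to' dst if some channel lets src reach dst."""
        return dst in adjacency.get(channel, {}).get(src, set())

    print(is_next_to("A", "B", "publish"))  # True:  A is next to B
    print(is_next_to("B", "A", "publish"))  # False: B is not next to A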

 

We identify objects of interest and concern (here I assume that would be a process) by perceiving boundaries between objects and the space between boundaries. How does this manifest in the dataverse? Is a context defined by the observer or the observable? It seems we can no longer think of space and time separately.

This is where Promise Theory (PT) helps a lot to define the problem. In PT you have agents, which are the sources and observers of all things. They correspond to locations in space, and their interior state corresponds to their sense of time. Einstein taught us that time is the rate at which your internal clock ticks, as you sample the stream of events from the exterior. The boundary that defines the difference between interior and exterior is the most important choice. It defines the scale of the agent, which in turn defines its ability to tell time and identify inside and outside, or self and non-self. Context is usually considered to be non-self. That’s because we imagine the environment to be about space. But there can also be timelike context — what we were thinking about just now. That’s why the interior process is really the arbiter of all these concepts. They really don’t exist without a definition of what is inside and what is outside this arbitrary choice of a boundary. In the dataverse, you have apparent boundaries, like a computer, or a software container. But you also have virtual boundaries, like domains, clusters, etc.
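A minimal sketch of that picture in code (the class and method names are illustrative assumptions, not anything from the book): an agent is a boundary drawn around interior state, and its “clock” is simply the rate at which that state changes as it samples exterior events.

    class PTAgent:
        def __init__(self, name, interior):
            self.name = name
            self.interior = set(interior)   # the boundary: what counts as "self"
            self.clock = 0                  # interior time = count of exterior samples

        def observe(self, source, event):
            if source in self.interior:
                return                      # change within the boundary, not an exterior event
            self.clock += 1                 # sampling the exterior advances the agent's time
            # ... interior state would be updated from the event here ...

    agent = PTAgent("host1", interior={"host1", "container-x"})
    agent.observe("container-x", "local write")   # inside the boundary: no tick
    agent.observe("network", "packet arrived")    # exterior event: the clock ticks
    print(agent.clock)                            # 1: the agent's time after one exterior sample

Redraw the boundary (change the interior set) and the same stream of events yields a different clock, which is the sense in which the choice of boundary defines the agent's scale and its ability to tell time.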

 

This brings me back to our earlier discussion on processes and events where you mention that events are internally generated. How are these generated? Is this a process within another process? Is it turtles all the way down?

Yes! It’s definitely mutant turtles all the way down. In any theory, there are things that you just have to define as “semantic primitives” — axioms that can’t be explained. We build on those. The spacetime view says that a process is the only natural entity that makes consistent sense. Einstein showed us that space alone doesn’t make invariant sense, nor does time. But spacetime processes can be defined, along with information that passes between them.

In IT we like to think of objects as primitives—code, containers, computers, etc. That’s analogous to our material obsession with matter versus space. But those distinctions are mostly prejudices that come about from living in a very slow world, with very fast powers of observation.

 

“All models are wrong, but some are more useful than others.” How does this apply to the ideas and concepts you have introduced in this book? How will this change the way we design and build systems?

I explain this in my book. Or as Picasso said — art is the lie that enables us to realize the truth. In formulating a theory, you are looking for a consistent story, with characters and processes. Promise Theory seemed to be the simplest choice for me, one that made the fewest number of assumptions. Ultimately all you can do is keep asking questions and picking away at the issues. There is no way to decide what’s true or not true. In fact, in the book, I also deconstruct how those ideas of true and false are dependent on scale and relativity. That’s in the engineering sense, not in a fluffy philosophical sense. It was important to me to write a proper discussion of these ideas, in full, without skimping. There is a lot of fluff written about this—even in theoretical physics. I’ve tried hard to stick to simple rational ideas.

 

Promise Theory is very well received in the tech industry. How do you see this new thinking and writing being perceived? What are the key takeaways, and what change (change being an essential element of the book) do you hope it brings about? What would you like to happen in the short and long term?

(Laughs). If my past track record is anything to go by, people will start by dismissing it or by sending threatening emails, and will eventually rediscover it and perhaps embrace it. I’m hoping that, by making the story as readable as I can, I’ll reach out to a younger generation of minds not stifled by prejudice or vested interest. In a story of this scope, there are many takeaways, but I’ll feel I succeeded if people tell me they were able to see space and time in a much more general way and connect it to what they understand of their daily lives. There’s this deliberate mystification of concepts in physics—part of the marketing machine of popular science—that makes the concepts the property of sage individuals, like Einstein. Try to understand them at your peril. We end up with a cartoon view of physics, and I think that’s bad. I try to stay with my hands in the dirt and explain the issues as I see them.

I’m happy that, after almost 20 years, Promise Theory is getting some traction. There are still those who refuse even to acknowledge it. Science is like that. At the end of the day, I think my role is to try to open people’s eyes to a simple unifying story that brings together many others.

 

Do you think we can explicitly engineer processes (or artificial life) or are we always looking in the rearview mirror and only recognizing such after the fact?

Of course, you can try to create something in the image of something else. Turing even proved that you could simulate any process using a machine. That doesn’t mean you could get it exactly right. The elephant (or turtle) in the room as far as AI is concerned is that advocates just ignore scale. Processes are critically dependent on spacetime scales. If you make a very fast computer, will it look like a human or a fly? If you make it consume vast amounts of data, will it think like a human or a whale?

 

The scope of this new book seems massive and an incredible undertaking on your part. Congratulations! What is next in that crystal ball of yours?

Thanks for the kind compliment! This is undoubtedly ambitious and a bit nerve-wracking, because I know that a lot of people will hate it and I’m putting my head on the block. I always try not to write another book, but I’m sure there will be others. Now, having spent time explaining too much, I want to get back to the technical end of things and help people to solve the challenges while folks catch up.

 

I can’t help thinking that this new book is needed now with current trends in the technology industry. Thoughts?

I’ve been looking at data pipelines with my new startup Aljabr.io lately. There’s a lot of potential there, I think, to bring about the next stage of cloud computing evolution. But industry lags behind ideas, so we’ll see where the currents take me.

I want to think the book is needed. That’s certainly why I put myself through the process of writing it. But, looking at the last book, In Search of Certainty, it has taken five years for it to start being accepted.

 

Do you have any ideas on how we can visually construct data flow processes with explicit modeling of spacetime?

I know this one will take even longer. But what is time anyway? (Laughs)

 

Agreed. One last question. Do we live in a simulated universe?

Who simulates the simulator? Turing taught us—it doesn’t matter, because you can’t tell the difference anyway.

 

Smart Spacetime: How information challenges our ideas about space, time, and process

What if space is not as we learn it in mathematics, but more like a network? What happens to the ability to measure things as you shrink or expand? On the surface, this is a book about physics, computers, artificial intelligence, and many other topics. It’s about everything that has to do with information. It draws on examples from every avenue of life and pulls apart preconceptions that have been programmed into us from childhood. It re-examines ideas like distance, time, and speed, and asks if we really know what those things are. If they really are such fundamental and universal concepts, can we also see them and use them in computers, or in the growing of a plant? Conversely, can we see phenomena we know from computers in physics? We can learn a lot by comparing the way we describe physics with the way we describe computers—and that throws up a radical view: the concept of virtualization, and what it might mean for physics.

Order from Amazon
