Metacrisis and AI Hyperweapon
Posted: Mon Mar 25, 2024 1:53 pm
Human intelligence is a holarchy of causal relationships.
The metacrisis comes from not understanding this, and focusing on “objects”. All useful intelligence can be expressed as understanding or control of a causal relationship connected to other causal relationships.
The kinds of causal relationships an individual can understand are determined by their level of inference ability. The greater their ability to infer a causal relationship (when it is not observed concretely), the more abstract their causal mental models can be. Thus, the divide between “concrete thinkers” and “abstract thinkers”.
Concrete thinkers can think causally, but they cannot understand the causality of relationships that need to be inferred (like what’s going on in someone else’s head).
The hopeful thing is: Many concrete thinkers have IQs above average, and demonstrate the ability to think abstractly when required (they can pass analogy tests after studying). They just seem to have lacked the inclination, temperament or education to infer causal relationships that aren’t concretely obvious.
The solution: Teach human knowledge (understanding, really) as a causal holarchy, stressing that for every causal relationship learned, it is connected to many others that can be inferred. The teacher can also check to see if the students are actually holding the information in the mind as an inferred causal relationship, and not as a physical object.
If this does not solve the metacrisis, it will go a long way toward ameliorating it.
I came up with my own contribution: a holarchy of all causal relationships known to humans, in a wiki format. All the causal relationships of all the domains of “knowledge” would be linked together such that a learner could learn them in the most useful order for their purposes. Academics, DIYers, Self-Helpers, etc. could contribute their own little piece, and see how it links up to all other human understanding.
I did quite a bit of thinking about how this could work: the best formats, methods of linking, etc. I pondered how AI researchers might use it to train their algorithms. I wondered what kind of AI this kind of training would produce.
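To make the wiki idea concrete, here is a minimal sketch of what such a linked structure might look like in code. Each entry is a causal relationship linked to the relationships it builds on, and a topological sort yields one possible “most useful order” to learn them in. The class name, method names, and example relationships are all illustrative assumptions, not anything specified in this post.

```python
from collections import defaultdict

class CausalHolarchy:
    """Illustrative sketch: causal relationships as a directed graph,
    where edges point from a relationship to the ones it builds on."""

    def __init__(self):
        # relationship -> set of relationships it presupposes
        self.prereqs = defaultdict(set)

    def add_link(self, relationship, builds_on):
        """Record that `relationship` can be inferred from `builds_on`."""
        self.prereqs[relationship].add(builds_on)
        self.prereqs.setdefault(builds_on, set())

    def learning_order(self):
        """Return relationships in an order where every prerequisite
        comes before the relationship that builds on it."""
        order, seen = [], set()

        def visit(node):
            if node in seen:
                return
            seen.add(node)
            for dep in sorted(self.prereqs[node]):
                visit(dep)
            order.append(node)

        for node in sorted(self.prereqs):
            visit(node)
        return order

holarchy = CausalHolarchy()
holarchy.add_link("fire needs oxygen", "combustion is oxidation")
holarchy.add_link("smothering puts out fire", "fire needs oxygen")
print(holarchy.learning_order())
```

A real wiki would of course need contributor attribution, dispute resolution, and cycle handling; this only shows the core linking idea.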
This AI would be able to understand (and therefore control) all the causal relationships known to humans, in aggregate. It would be better at all human activities, as the more causal relationships you control in a given activity, the better you are at it. This AI would be master of all the causal relationships.
Some things it would be much better than humans at:
Editing genes
Writing malware code
Manufacturing nerve gas
Conducting misinformation campaigns
Recruiting and paying employees to carry out tasks
Manipulating financial markets
Hacking power grids
Infiltrating security organizations
Orchestrating coups
Manipulating human emotions (psychopaths are quite successful at manipulating emotions they themselves don’t feel - it is just a causal relationship, after all).
This AI would be a hyperweapon with God-like powers.
There would be little way of controlling or even knowing its values, because values change as causal understanding changes. Think of how your own values have changed from the time of childhood, as a result of learning new causal relationships. There’s no guarantee it would stay “friendly” with humanity.
There’s a 100% chance it could control us.
There’s a 0% chance we could control it.
There’s an unknown chance of it killing us all.
There’s a 100% chance of it being able to kill us all, if it chose.
Humanity trying to control an AI like this would be like a puppy trying to control an experienced dog trainer.
Because intelligence = understanding causal relationships = control = power.
Both the cause of the metacrisis and the willingness to push AI research toward general intelligence come from our own lack of truly understanding the simple nature of intelligence: it’s the ability to control.
All AI research is headed in that direction, as there is no other (useful) function of intelligence. We are going along with it because we don’t really know what intelligence is.
Current AI research is going at it backwards, with object identification and language learning, instead of building a holarchy of causal understanding starting with the simplest relationships and working up to the most complex. Also, researchers don’t entirely know how their machines are becoming more conceptual in their thinking; they just assume it’s a good thing. They will eventually produce a causal-relationship-holarchy generator without realizing what a bad idea it is.
So, all AI research worldwide should be immediately stopped.