Notes from the Artificial Intelligence and the Singularity conference in September.
Antarctica Starts Here. 2014-10-08
Summary:
As I've mentioned several times before, every couple of weeks the Brighter Brains Institute in California holds a Transhuman Visions symposium, where every month the topic of presentation and discussion is a little different. Last month's theme was Artificial Intelligence and the Singularity, a topic of no small amount of debate in the community. Per usual, I scribbled down a couple of pages of notes that I hope may be interesting and enlightening to the general public. A few of my own insights may be mixed in. Later on, a lot of stuff got mixed together because I only wrote down assorted interesting bits and stopped separating the speakers in my notes. My bad. As always, all the good stuff is below the cut...

Monica Anderson of Sensai and Syntience, Inc. - Doing AI Wrong: We Can Always Pull the Plug
- Dual process theory - consequences for AI - theme questions
- Daniel Kahneman - Thinking, Fast and Slow
- Two modes of thought: Intuitive understanding (fast), logical reasoning (slow)
- Understanding is a parallel process, very fast, subconscious, involuntary
- Once on, understanding can't be switched off
- Bandwidth of the human eye is about 10 megabits per second
- Bandwidth of conscious reasoning is about 100 bits per second
- Data reduction, epistemic reduction. (A quick back-of-the-envelope calculation below shows the scale.)
- Libet delay is about 500 milliseconds (one-half second)
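An aside from me: the scale of that data reduction is worth making explicit. Here's a quick back-of-the-envelope calculation using the ballpark figures quoted above, so treat the result as order-of-magnitude only:

```python
# Rough ratio between what the eye takes in and what conscious
# reasoning can process, using the ballpark figures from the talk.
eye_bandwidth_bps = 10_000_000   # human eye: ~10 megabits per second
reasoning_bandwidth_bps = 100    # conscious reasoning: ~100 bits per second

reduction = eye_bandwidth_bps / reasoning_bandwidth_bps
print(f"Perception-to-reasoning data reduction: ~{reduction:,.0f}x")
# Perception-to-reasoning data reduction: ~100,000x
```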
- Reductionism is exactly the use of models. Simplification of a rich reality.
- Reasoning requires models. So, AI tried to model the world.
- Comprehensive world models are intractable. Frame problem (McCarthy and Hayes). Consequently, AI was limited to toy problems.
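To make "intractable" concrete: even if the world were described by nothing but independent true/false facts, the number of possible world states doubles with every fact. A tiny illustration (the numbers are mine, just to show the growth):

```python
# State-space blowup: with n independent boolean facts there are
# 2**n possible world states, which is why comprehensive world
# models stop being enumerable almost immediately.
for n_facts in (10, 50, 100):
    print(f"{n_facts} facts -> {2**n_facts:.2e} possible world states")
```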
- Holistic - context-exploiting - AI to attack understanding problems.
- Doing what we do without reasoning
- All intelligent agents are fallible. The world changes; they make mistakes. They're limited by world complexity, not by technology. More AGI means a more complex world.
- Recursive self-improvement is inherently limited. Understanding is a requirement.
- Consciousness is not required. A red herring? Writable long-term memory is not required; the knowledge base can be frozen after training. No need for multiple modes of sensory input. Text is fine. No mobility, embodiment, or enactment. Even agency is not required. They do what they're told. No personhood.
- The kind of AGI is more important than its IQ.
- The same algorithm works across domains; it just has to be trained for each one.
- Human equivalence will eventually arise as technology progresses.
- Recognize what you've encountered before. Track failures and successes. Discard old patterns that aren't useful anymore.
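That bullet is basically a bookkeeping loop, and it's easy to sketch. A minimal illustration from me (the class, the scoring rule, and the pruning threshold are my own inventions, not anything from the talk):

```python
from collections import defaultdict

class PatternMemory:
    """Toy sketch of 'track successes and failures, prune what stops
    working'. Names and thresholds are illustrative only."""

    def __init__(self, min_score=-3):
        self.scores = defaultdict(int)  # pattern -> running success/failure tally
        self.min_score = min_score

    def seen_before(self, pattern):
        return pattern in self.scores

    def record(self, pattern, succeeded):
        # Reward patterns that keep working, penalize ones that don't.
        self.scores[pattern] += 1 if succeeded else -1
        if self.scores[pattern] < self.min_score:
            del self.scores[pattern]  # discard patterns that stopped being useful

# Usage: memory = PatternMemory(); memory.record("greeting->reply", succeeded=True)
```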
- "You can only learn that which you almost know." --Patrick Winston
- The AGI software has to make its own models.
- Machines capable of autonomous reduction. Understanding.
- Understanding must be implemented without using models. Exploits context.
- Can operate on scant evidence; resistant to ambiguity and misinformation.
- Cognitive biases are an emergent property of understanding.
- Salience - Knowing what's important.
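Here's one way to make "exploits context, no hand-built model" concrete: a toy predictor that only remembers what followed the contexts it has actually experienced, backing off to shorter contexts when evidence is scant. To be clear, this is my own illustration of the idea, not Syntience's actual algorithm:

```python
from collections import defaultdict, Counter

class ContextPredictor:
    """Toy context-exploiting predictor: no hand-built world model, just
    statistics over contexts it has actually seen. (An illustration of
    the idea only, not Syntience's real algorithm.)"""

    def __init__(self, max_context=4):
        self.max_context = max_context
        self.following = defaultdict(Counter)  # context string -> next-char counts

    def train(self, text):
        for n in range(1, self.max_context + 1):
            for i in range(len(text) - n):
                self.following[text[i:i + n]][text[i + n]] += 1

    def predict(self, context):
        # Back off to shorter and shorter contexts when evidence is scant.
        for n in range(min(self.max_context, len(context)), 0, -1):
            counts = self.following.get(context[-n:])
            if counts:
                return counts.most_common(1)[0][0]
        return None

p = ContextPredictor()
p.train("the cat sat on the mat")
print(p.predict("the c"))  # 'a' -- it has seen "he c" followed by 'a'
```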
- There are some model-based approaches that are useful.
- Understanding is not subconscious. Parsing is automatic, and understanding requires conscious thought. (Maybe it's precomputed?)
- Artificial General Intelligence == human equivalent intelligence, learning systems.
- Learning, reasoning, general problem solving capability.
- Capable of acquiring knowledge and learning how to do new things. Ongoing, cumulative, contextual, adaptive. Autonomous. Experience-based improvement and adaptation.
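"Ongoing, cumulative, experience-based" is essentially online learning: the system updates from each new experience instead of being retrained from scratch. A minimal sketch using a plain perceptron update (a deliberately simple stand-in of mine, not anyone's AGI algorithm):

```python
def perceptron_update(weights, features, label, lr=0.1):
    """One online-learning step: adjust the weights only when the current
    prediction is wrong, so improvement accumulates from experience."""
    activation = sum(w * x for w, x in zip(weights, features))
    prediction = 1 if activation > 0 else -1
    if prediction != label:  # learn from the mistake
        weights = [w + lr * label * x for w, x in zip(weights, features)]
    return weights

# Each (features, label) pair is one 'experience'; learning never stops.
weights = [0.0, 0.0]
for features, label in [([1, 0], 1), ([0, 1], -1), ([1, 1], 1)]:
    weights = perceptron_update(weights, features, label)
print(weights)  # [0.1, 0.0]
```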
- Potentially, many separate techniques can be applied in agent-based architectures that aid one another. Interdisciplinary.
- Top down versus bottom up approaches
- Optimize for certain cognitive biases temporarily?
- http://adaptiveai.com/faq/
- Human level cognition, not ability. "Helen Hawking."
- Tool use is a necessity. Goal directed. Over a dozen learning modes.