SHI300: Essence (Choose)

Essence

Premonition

As computers become more and more powerful, they won’t be substitutes for humans: they’ll be complements.


AS MATURE INDUSTRIES stagnate, information technology has advanced so rapidly that it has now become synonymous with “technology” itself. Today, more than 1.5 billion people enjoy instant access to the world’s knowledge using pocket-sized devices. Every one of today’s smartphones has thousands of times more processing power than the computers that guided astronauts to the moon. And if Moore’s law continues apace, tomorrow’s computers will be even more powerful.

Computers already have enough power to outperform people in activities we used to think of as distinctively human. In 1997, IBM’s Deep Blue defeated world chess champion Garry Kasparov. Jeopardy!’s best-ever contestant, Ken Jennings, succumbed to IBM’s Watson in 2011. And Google’s self-driving cars are already on California roads today. Dale Earnhardt Jr. needn’t feel threatened by them, but the Guardian worries (on behalf of the millions of chauffeurs and cabbies in the world) that self-driving cars “could drive the next wave of unemployment.”

Everyone expects computers to do more in the future—so much more that some wonder: 30 years from now, will there be anything left for people to do? “Software is eating the world,” venture capitalist Marc Andreessen has announced with a tone of inevitability. VC Andy Kessler sounds almost gleeful when he explains that the best way to create productivity is “to get rid of people.” Forbes captured a more anxious attitude when it asked readers: Will a machine replace you?

Futurists can seem like they hope the answer is yes. Luddites are so worried about being replaced that they would rather we stop building new technology altogether. Neither side questions the premise that better computers will necessarily replace human workers. But that premise is wrong: computers are complements for humans, not substitutes. The most valuable businesses of coming decades will be built by entrepreneurs who seek to empower people rather than try to make them obsolete.

Problem

We don’t understand intelligence


Purpose



General Intelligence
Learn by doing — a chance to learn
Theory in practice — learn by doing
Deep dive
Learning curves
Procedural knowledge
Active learning


Learning by Doing

Learning by doing refers to a theory of education expounded by American philosopher John Dewey and Latin American pedagogue Paulo Freire. It's a hands-on approach to learning, meaning students must interact with their environment in order to adapt and learn.[1] Freire highlighted the important role of individual development, seeking to generate awareness and nurture critical skills.[2] Dewey implemented this idea by setting up the University of Chicago Laboratory Schools.[3] His views have been important in establishing practices of experiential education. For instance, the learn-by-doing theory has been adopted and applied to the development of professional learning communities.[4]
"For the things we have to learn before we can do them, we learn by doing them."
— Aristotle, Nicomachean Ethics


Procedural Knowledge

Procedural knowledge (also known as know-how, and sometimes referred to as practical knowledge, imperative knowledge, or performative knowledge)[1] is the knowledge exercised in the performance of some task. Unlike descriptive knowledge (also known as "declarative knowledge," "propositional knowledge," or "knowing-that"), which involves knowledge of specific facts or propositions (e.g. "I know that snow is white"), procedural knowledge involves one's ability to do something (e.g. "I know how to change a flat tire"). A person doesn't need to be able to verbally articulate their procedural knowledge in order for it to count as knowledge, since procedural knowledge requires only knowing how to correctly perform an action or exercise a skill.[2][3]
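The knowing-that/knowing-how split has a loose analogue in software, which may be a useful intuition pump here. In this sketch (a toy illustration, not from the source), a declarative fact is stored as data that can be stated, while a procedural skill is embodied in runnable code that performs a task without ever stating a proposition about how the task works:

```python
# Declarative ("knowing-that"): explicit facts that can be articulated.
facts = {"snow_is_white": True}

# Procedural ("knowing-how"): a skill encoded as a process. The function
# can sort a list correctly without containing any stated proposition
# about what sorting is or why the procedure works.
def insertion_sort(items):
    result = []
    for item in items:
        i = 0
        # Walk forward until we find the insertion point.
        while i < len(result) and result[i] < item:
            i += 1
        result.insert(i, item)
    return result

print(facts["snow_is_white"])        # a fact we can state
print(insertion_sort([3, 1, 2]))     # a skill we can only exercise
```

The parallel mirrors the paragraph's point that procedural knowledge need not be verbally articulable: the program demonstrably "knows how" to sort, yet nothing in it asserts the fact "this list is sorted."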


Questions:

Is AGI possible?
If so, when?
What’s getting in our way right now?
If not, why not?
Is General Intelligence/Strong AI the goal here?
If not, what is our goal? What’s the anticipated application of narrow AI?
In which ways do we see AI as complementary to Human Development?
What is our best argument for the displacement of human capital (i.e., automating away jobs)?
How do we make sure AI has the right moral intuitions?
What’s our answer to the Trolley Problem?
Who gets to decide?
How do we make sure we don’t all turn into paperclips?
Can we talk about Gato?