

Steps to create Artificial General Intelligence (AGI)?

Asked by MiachelCombrink (22 points) March 27th, 2020
7 responses
“Great Question” (3 points)

What is required to create A.G.I.: what steps, hurdles, elements, questions, answers, etc.?
I know this hasn’t been achieved yet, but any thoughts and discussion (personal or professional) on the matter would be greatly appreciated.


Answers

Inspired_2write

A wise person would first need to make sure there are ethical procedures in place to calculate every possibility that could happen and every result. Impossible at this point in time.

Jaxk

It seems important to know what you’re going to do with it once you’ve got it. It gives a whole new meaning to:
“We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.”
It gets tricky once computers become self-aware.

bernd

A real AI wouldn’t be a software program; it would need hardware comparable to our perceptual system, especially the auditory system and memory. Shouldn’t be a problem to screw something like that together, haha.

mrainer

This question got recommended to me very recently, even though it was posted over two years ago. I’m nonetheless taking the liberty of answering, since the question has remained relevant, which is to say it has remained an open question. An open question is not the same as an unanswerable question, though some claim that questions such as OP’s should count as unanswerable.

At the moment, there are no “steps” one can follow to “create” AGI. What we have are speculations and theories about how it might emerge; some admit that they’re making theoretical claims, whereas others argue that they know with certainty how AGI will emerge.

For a very good and very detailed exploration of this, see Meghan O’Gieblyn’s “God, Human, Animal, Machine”—https://www.google.com/books/edition/God_Human_Animal_Machine/cNp0EAAAQBAJ?hl=en&gbpv=0
It sheds light on the metaphors we use today to talk about intelligence in general as well as about AI, AGI, and the Singularity. It argues that many of these metaphors are eerily similar to theological and religious metaphors. The arguments are beautiful and nuanced; I’d recommend the book. It also features some excellent comments on Ray Kurzweil’s “The Age of Spiritual Machines.” Ultimately, it points out instances of transductive reasoning in some contemporary AGI theories.

For a great account of the similarities between the human unconscious and machine creativity, see this piece on n+1 (also by O’Gieblyn) – https://www.nplusonemag.com/issue-40/essays/babel-4/

Entropy

I think we’re alot further from a AGI than people think. ‘Futurists’ have been predicting this as being ‘close’ for 30 years now. I think AI is getting better and better when it comes to tasks where we can write a defined test to ‘select’ a learning program to develop a neural net suited to that task. But that’s very different from an AGI.

I’m also not sure whether AGI is all that desirable a destination. AI is going to be an important part of our coming technology base…but is AGI all that useful? As an achievement it’s impressive, but does it really make life better?

mrainer

@Entropy – that’s a good question – “would it really make life better?” It’s an important ethical question we need to ask ourselves. ML, AI, or data companies might be hiring more of or relying more on those who bring ethical concerns to the forefront, but ultimately they seem to answer that question with an unequivocal yes.
