This question was recommended to me very recently, even though it was posted over two years ago. I'm nonetheless taking the liberty of answering it, since the question has remained relevant, which is to say it has remained an open question. An open question is not the same as an unanswerable question, though some claim that questions such as OP's should count as unanswerable.
At the moment, there are no "steps" one can follow to "create" AGI. What we have are speculations and theories about how it might emerge; some admit that they're making theoretical claims, whereas others claim to know with certainty how AGI will emerge.
For a very good and very detailed exploration of this, see Meghan O'Gieblyn's "God, Human, Animal, Machine": https://www.google.com/books/edition/God_Human_Animal_Machine/cNp0EAAAQBAJ?hl=en&gbpv=0
It sheds light on the metaphors we use today to talk about intelligence in general as well as about AI, AGI, and the Singularity. It argues that many of these metaphors are eerily similar to theological and religious metaphors. The arguments are beautiful and nuanced; I'd recommend the book. It also features some excellent comments on Ray Kurzweil's "The Age of Spiritual Machines." Ultimately, it points out instances of transductive reasoning in some contemporary AGI theories.
For a great account of the similarities between the human unconscious and machine creativity, see this piece in n+1 (also by O'Gieblyn): https://www.nplusonemag.com/issue-40/essays/babel-4/