I think alignment is necessary but not sufficient for AI to go well. By alignment, I mean being able to get an AI to do what we want it to do, without it trying to do things basically nobody would want, such as amassing power to prevent its creators from turning it off.
I believe that the best governance structure for allowing more democratic control and transparency is open source and open governance with multiple stakeholders, rather than secrecy and proprietary development. Clearly that's a double-edged sword, since bad actors will have more access to capabilities, but I think it ultimately leads to AGI being governed more effectively. That said, traditional open source governance is quite technocratic, so on its own it's not likely to be a big improvement. I believe that DAOs (decentralized autonomous organizations) offer a more promising model, although it's important that governance and participation are set up to allow for democratic and broad stakeholder input. I also see more promise in that kind of bottom-up, grassroots effort than in a UN-style intergovernmental body: I am very skeptical that nations would cede this kind of power to such a body, and I believe it will be incredibly hard to police or detect unaligned AGI efforts, especially once the ability to create an aligned AGI is known.
I suspect that such an AGI would initially have a lot of difficulty convincing governments to make significant changes, at least through ethical means; I think it would take major capability advances before it could offer compelling incentives. It would probably be much easier for it (and for Onus using it) to come up with deceptive and manipulative ways to achieve its goals (e.g., creating panic, or staging demos that promise utopia). It's also likely that a kind of arms race would emerge among governments to let the AGI operate locally in exchange for benefits or priority access, so some smaller governments would probably allow it to build capability.