My question is: How much do you trust AI solutions?
In general, do not trust blindly. I would think a programmer could include AI explanations of its solutions, along with help to increase your own capability to work out and understand the solution yourself.
But what if the day arrives when that is the case - the day when AI gives an answer we don't understand - would you trust it?
How can you trust what you don't understand? We trust God because He has proven Himself, hopefully. Likewise, I would think we need to trust based on proof, not blindly trust.
Plus, as I offered above: a competent program should be competent enough to give you principles that verify its answer, and to highlight the steps of its process, so you can evaluate its solution and learn from how it arrived at it.
For example: if you are using AI to work out how to dismantle the tangled steel of the fallen Maryland bridge, you should have access to how the AI calculated its answer from the information given to it. And perhaps test it first with data from cases where you already know the answer, and see if it gives you those best answers; then trust, but still verify, how it says to take apart the tangle in Maryland.
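To make the "test it first with known cases" idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical: `ai_estimate` is a toy stand-in for whatever the real AI would produce, and the numbers are invented so the example runs on its own.

```python
# Sketch of "test with known-answer cases before trusting" (all values hypothetical).

def ai_estimate(load_tons: float) -> float:
    """Hypothetical AI output: crane capacity needed for a given load.
    A toy rule stands in for the real model."""
    return load_tons * 1.25

# Cases where we already know the right answer.
known_cases = [
    (100.0, 125.0),
    (40.0, 50.0),
    (8.0, 10.0),
]

def validate(model, cases, tolerance=0.05):
    """Return the fraction of known cases the model gets right,
    within a relative tolerance."""
    hits = 0
    for inputs, expected in cases:
        got = model(inputs)
        if abs(got - expected) <= tolerance * abs(expected):
            hits += 1
    return hits / len(cases)

score = validate(ai_estimate, known_cases)
print(f"agreement with known answers: {score:.0%}")
```

Only after a high score on cases you can check would you move on to the real, unknown problem, and even then you would still verify the reasoning the AI shows you.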
And for medical situations . . . this can be trickier. There may be less predictable principles in the mix, since biology and individual patient conditions are involved, and perhaps also the personalities and special capabilities of the medical workers to consider.
And politics, of course, involves various personalities which neither AI nor humans can possibly understand well enough to predict which course of action will lead to which outcome.
And especially in politics, you likely will not have all the information on what makes each politician tick, not to mention that people can rig an AI to be prejudiced, just as human intelligence can be.
So, you want some way to efficiently verify the process. Maybe even have an independently developed AI program "who" can review a process and detect shortcuts and even deliberate rigging of the input information . . . a built-in fact checker.
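The "independent reviewer" idea can be sketched very simply: ask two independently built systems the same question and flag any disagreement for human review. Both "models" below are hypothetical toy functions (one deliberately rigged) so the example is self-contained.

```python
# Sketch of cross-checking two independent AI systems (toy stand-ins).

def model_a(question: str) -> str:
    answers = {"capital of France": "Paris", "2 + 2": "4"}
    return answers.get(question, "unknown")

def model_b(question: str) -> str:
    # Deliberately rigged on one answer, to show what the check catches.
    answers = {"capital of France": "Paris", "2 + 2": "5"}
    return answers.get(question, "unknown")

def cross_check(question: str) -> dict:
    """Compare the two models' answers and flag disagreement."""
    a, b = model_a(question), model_b(question)
    return {"question": question, "a": a, "b": b, "agree": a == b}

for q in ["capital of France", "2 + 2"]:
    result = cross_check(q)
    flag = "" if result["agree"] else "  <-- needs human review"
    print(f"{q}: A={result['a']}  B={result['b']}{flag}")
```

Agreement does not prove truth (both systems could share the same blind spot), but disagreement is a cheap, automatic signal pointing a human at exactly where to look closer.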
Because we know how highly intelligent people are capable of lying! Likewise . . . for all we know, a highly capable AI program could be highly capable of lying!!