Forums › Discussion and Debate › Physical & Life Sciences › AI & Trust
<blockquote data-quote="SelfSim" data-source="post: 77576879" data-attributes="member: 354922"><p>Hmm .. I've just watched more of the video. They used AI to synthesise new theoretical models for testing.</p><p></p><p>In the scenario of your above post, you're talking about possibly serious results emerging from the implementation of production technologies. There's a myriad of step-by-step testing between the initial modelling (or prototyping) and production phases. It's hard to imagine those steps not involving observation of safety protocols. It looks a bit like hyperbolisation without mentioning the in-between steps(?)</p><p></p><p>The <em>'trust'</em> issue I think you're concerned about only seems to become an issue where the step-by-step process moving from theoretical modelling to the production of technologies is completely ignored(?) If anything, I'd say that <em>'trust'</em> comes from the incremental buildup of knowledge gained throughout that overall end-to-end process, which, by the post-production phases, is far removed from where AI was used, and so too is any <em>'trust'</em> in AI(?)</p><p></p><p>It's an intriguing question, but I don't think scientifically minded humans would just blindly stumble forward with something they are completely ignorant about.</p></blockquote><p></p>