Hey, have you seen the “A.I. Dilemma” video yet? It’s a conversation between Tristan Harris and Aza Raskin discussing the risks of existing A.I. capabilities and how A.I. companies are racing to deploy them without proper safety measures.
As a Technical Architect, I’m always excited about new technologies, but after watching the video, I agree that we need to put guardrails in place to avoid misusing this incredible technology.
It’s not just about the benefits A.I. can bring, but also the risks it poses to society.
The video also talks about the need to upgrade our institutions for a post-A.I. world, which is a critical conversation as we move towards a more digitally connected future. The speakers encourage viewers to call their political representatives and advocate for holding hearings on A.I. risk and creating adequate guardrails.
If you’re interested in technology and its impact on our society, you should definitely check out the video below:
It’s thought-provoking and reminds us of the ethical and moral implications that come with deploying A.I. in various fields.
But don’t stop there! I recommend enrolling in the “Foundations of Humane Technology” course from the Center for Humane Technology.
I recently started the course myself, and it’s full of excellent content that provides insight into how technology can and should be designed to serve humanity better.
You can find the course here: https://www.humanetech.com/course.
Together, we can build a future where A.I. serves humanity in a positive and responsible way.
P.S.: Thanks to ChatGPT for supporting me with writing this post and to Fotor for generating the featured image of this post 🙂