Source: medium.datadriveninvestor.com
Recent events, amplified by mainstream media, suggest that we are at the dawn of the first truly 'sentient' Artificial Intelligence, commonly referred to by the acronym AGI, which stands for Artificial General Intelligence.
We know that this reality is far from being realized, but in this mixture of longing and fear, we risk overlooking the current and very real risks already present in the vast field of Artificial Intelligence research.
A very recent study by Thomas Hellström and Suna Bensch, entitled 'Apocalypse Now: No Need for Artificial General Intelligence', tells us precisely this: 'The worst case scenario is that AGI becomes self-aware and prioritizes its own existence over people, who are seen as a threat because they can decide…
Read More at medium.datadriveninvestor.com