The Rise of Artificial Intelligence
Generally, when most people hear the words artificial intelligence, the first image that comes to mind is a menacing robot. That picture is not accurate. Artificial intelligence is an imitation of human intelligence programmed into machines, using algorithms to simulate human thought processes and actions. The first concepts of AI emerged in the 1950s, when mathematicians, scientists, and philosophers explored the possibility of machines that could solve problems and make decisions like humans. Over the years, AI research has grown by leaps and bounds, and many of the programs it has produced are used daily. From virtual assistants like Siri to the autopilot on an aircraft, artificial intelligence is everywhere.
Further research on AI raises awareness and gives people the knowledge to understand how beneficial or dangerous this technology could be. As artificial intelligence continues to develop, many question whether AI will advance past humans and cause a singularity to occur, leading to the evolution of superintelligences that are smarter than humans. Instead of AI taking over humanity, however, AI and humans can merge their intelligence and work together, integrating for maximum efficiency. AI confinement protocols can serve as a solution that slows the effects of the singularity and allows humans to benefit safely from the superintelligences that form.
History of Artificial Intelligence
Artificial intelligence derives from an area of computer science devoted to creating intelligent programs that can solve problems, plan, learn, recognize speech, and more. According to Weber (2019), this includes areas in "which human brainpower is simply too limited or would take too long to be useful" (p. 4, para. 10). When the ideas of artificial intelligence were first presented, computers were the obvious candidate to house such intellectual programs. In the 1950s, however, computers were costly and could not store the commands they were given, a capability essential to developing an AI system. After many modifications, computers could not only execute all sorts of commands but store them as well. This modernization allowed researchers to explore the different realms of artificial intelligence in far more depth than before.
Five years after the first notions of artificial intelligence were discussed, the first proof of concept was born: the Logic Theorist. Created by Cliff Shaw, Herbert Simon, and Allen Newell, "The Logic Theorist was a program designed to mimic the problem-solving skills of a human and was funded by Research and Development (RAND) Corporation" (Anyoha, 2017, p. 3). Considered a thinking machine, the Logic Theorist was engineered to prove theorems from Principia Mathematica. This cognitive simulation is regarded as the first working AI program to demonstrate human-like problem-solving. The team's progress had a significant impact on the field of artificial intelligence and served as the catalyst for the flourishing of AI research over the next half-century.
Today, artificial intelligence is encountered daily, from email filters and nudging suggestions to social media feeds and customer service chatbots. The application of artificial intelligence is said to make life more efficient: people can connect with friends and family, request rides using sharing apps, and navigate to unfamiliar places. AI has made it possible for people to accomplish much more by working together with its intelligent software.
For the future of artificial intelligence, AI language systems and self-driving cars are now in progress. AI language systems will support smoother conversations between humans and automated machines, as well as translate between two languages in real time. On the other side of the horizon, concepts for driverless cars have been announced, and much work is being done to produce a proven concept within the next 20 years. In the beginning, "artificial intelligence started as a field whose goal was to replicate human-level intelligence in a machine" (Brooks, 1991, para. 2). The goal of those researching and creating AI now is to produce programs that cognitively surpass human capabilities in many, if not all, tasks.
Artificial intelligence is given the capability to learn intelligently by permitting machines to learn from experience and, in turn, perform tasks similar to those of humans. Some components that make up artificial intelligence are machine learning, deep learning, and cognitive computing. These elements give AI the skills to process the massive amounts of data generated in the world every day. IT professionals write algorithms that allow AI systems to sift through and analyze data that could be useful, for example, for figuring out consumer habits for a business. According to an article on AI and big data, "first data is fed into the AI engine, making the AI smarter. Next, less human interaction is needed for the AI to run properly, and finally, the less AI needs people to run, the closer society comes to realizing the full potential of artificial intelligence" (Maryville University, n.d., para. 13). With little to no human involvement and massive amounts of data being fed to AI systems, they can take on more tasks, solve more problems, and, over time, learn enough to become smarter than humans.
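The feed-data, get-smarter, need-less-human-help loop described above can be sketched in a few lines of code. This is a deliberately tiny, hypothetical illustration (the class, words, and threshold are invented for this example, not taken from any cited source): a toy message filter defers to a human reviewer while its evidence is thin, and defers less as more labeled data is fed in.

```python
# Hypothetical sketch of the loop described above: data in -> smarter
# system -> less human involvement. Not a real AI engine.
from collections import Counter

class ToyFilter:
    def __init__(self, min_evidence=3):
        self.spam_words = Counter()   # word counts seen in spam examples
        self.ham_words = Counter()    # word counts seen in normal examples
        self.min_evidence = min_evidence

    def feed(self, text, is_spam):
        """Step 1: data is fed in, making the filter 'smarter'."""
        words = text.lower().split()
        (self.spam_words if is_spam else self.ham_words).update(words)

    def classify(self, text):
        """Step 2: with enough evidence, no human is needed;
        with too little, the system still asks a person."""
        words = text.lower().split()
        s = sum(self.spam_words[w] for w in words)
        h = sum(self.ham_words[w] for w in words)
        if s + h < self.min_evidence:
            return "ask human"        # not enough data yet
        return "spam" if s > h else "ham"

f = ToyFilter()
f.classify("win money now")           # "ask human": no data fed yet
f.feed("win money now", is_spam=True)
f.classify("win money now")           # now decided without a human
```

The point of the sketch is only the shape of the loop: every call to `feed` reduces how often `classify` has to fall back on a person, mirroring the quoted pipeline.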
At the rate artificial intelligence is advancing, further developments will continue to tip the scale and leave ample room for a singularity to occur. According to Chalmers (2012), "a singularity is a rapid increase in intelligence to superintelligence (intelligence of far greater than human levels), as each generation of intelligent systems creates more intelligent systems in turn" (p. 147). Ultimately, this event would mark the moment when artificial intelligence exceeds human intellect altogether, the result of AI that learns to better itself and to create improved versions of itself. At that point, humans would no longer retain control over these systems; control would shift to the superior superintelligence.
The rise of artificial intelligence is a global societal issue because people have different perspectives on how dangerous or beneficial this technology could be for the world. Some praise AI as the best invention of our time. According to a Brookings Institution survey (West, 2018), "Thirty-eight percent felt artificial intelligent robots would make their lives easier in the next five years" (para. 8). Facilitating conversations via chatbots, performing light housekeeping, and controlling homes with the touch of a button are what most look forward to. Others are skeptical and believe it is a mistake to give AI the ability to learn on a more profound level, fearing artificially intelligent domination that would be dangerous for humanity. It is said that the singularity is inevitable, and as creators continue to improve on previous AI models, the systems will keep learning rapidly, allowing them to become smarter than humans. Nevertheless, the outcome does not have to be negative. Transparency will play a crucial role in the solution to this problem: if creators are more transparent about the possible uses of AI and the needs of humanity, it can be shown that AI and humans can work together.
Consider how resourceful it would be to work with AI that can assist with daily operations without having to be extensively trained, looked after, or told what to do repeatedly. Artificial intelligence is built to process information faster and in greater volume than any human can. Deep learning, the application behind AI's neural connections, "focuses on computational models for information representation that exhibit similar characteristics to that of the neocortex" (Arel, Rose, & Karnowski, 2010, para. 10). Algorithms used in deep learning are designed to mimic the neural networks of the human brain, where successive layers allow the AI to learn from data repeatedly, refining on each outcome. The more an AI learns through deep learning, the better it will perform and make decisions. This could be beneficial when AI and humans work together. For this to occur, however, it must be understood that knowing how to control an AI system's artificial neural networks is the key to managing and slowing the effects of the singularity.
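The learn-repeatedly, refine-on-each-outcome idea at the heart of neural networks can be shown with a single artificial neuron. This is a minimal sketch, not real deep learning (a deep network stacks many such units in layers); the function name, data, and learning rate are hypothetical choices for illustration.

```python
# Toy illustration of iterative learning: one artificial neuron nudges
# its weights a little after every example it sees, the same
# refine-on-each-outcome loop that deep networks repeat across layers.

def train_neuron(examples, epochs=20, lr=0.1):
    w0, w1, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in examples:
            out = 1 if w0 * x0 + w1 * x1 + bias > 0 else 0
            err = target - out        # how wrong was this prediction?
            w0 += lr * err * x0       # nudge each weight toward the answer
            w1 += lr * err * x1
            bias += lr * err
    return w0, w1, bias

# Learn logical AND from its four input/output pairs.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, b = train_neuron(data)

def predict(x0, x1):
    return 1 if w0 * x0 + w1 * x1 + b > 0 else 0
```

After a few passes over the data the neuron classifies all four cases correctly, purely by being corrected each time it was wrong, which is the behavior the paragraph above attributes, at much larger scale, to deep learning.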
From the start of artificial intelligence, there have been significant concerns surrounding AI: what exactly it is, how it is used, and where it gets the data it learns from. According to Rossi (2019), "these concerns are among the obstacles that hold AI back or that cause worry for current AI users, adopters, and policymakers" (para. 6). One solution is to improve the transparency of those who create artificial intelligence, giving humanity a chance to understand how AI is advancing. Being more transparent about plans for AI could help people comprehend artificial intelligence, be less fearful of what AI can do, and recognize the ways the world already interacts with AI, improving people's attitudes toward this type of technology overall.
Once that connection is made, deepening it can make people's lives more efficient through collaboration. For example, physicians have busy schedules, and AI is now being integrated to aid doctors with various tasks, including analyzing swabs faster, designing treatment plans, and mining data from medical records. Artificial intelligence has proven helpful with these tedious tasks, and working with it frees doctors to perform other functions more effectively and efficiently.
The second solution would be to create an AI confinement protocol that upholds the safety of artificial intelligence by sealing the AI in hardware that prevents it from exchanging information with outside sources. The AI would only be able to learn from data programmed by its creator. If an exchange of information were to occur, someone could obtain crucial information or develop hazardous software such as an artificially intelligent virus. According to an article on leakproofing the singularity, "the consequences would be unquestionably disastrous and would pose risks currently unseen in malware with subhuman intelligence" (Yampolskiy, 2012, para. 9). A malicious virus can spread fast, go undetected, and infect and encrypt entire file systems. Here lies the possibility of the AI system being turned against the creator who controls it.
Properly controlling AI and using it for good is therefore essential. Keeping the AI's data encased in its own system helps contain its information to its own network while still upholding its intellectual integrity. The creator would program algorithms that allow the AI to take part in machine learning, but only on a limited scale, since the AI would not be able to gather new material on its own. This would slow the effects of the singularity because the artificial intelligence could not send or receive information from other sources, giving humans better control over how quickly it can learn.
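The gatekeeping idea behind such a protocol can be sketched in code. This is a hypothetical, software-only illustration (the class and method names are invented here; true confinement of the kind Yampolskiy describes would also require hardware isolation): the system may learn only from datasets its creator registered, and any attempt to reach an outside source is refused and logged.

```python
# Hypothetical sketch of a confinement boundary: the AI learns only
# from creator-approved, in-box data; outside exchanges are blocked.

class ConfinedLearner:
    def __init__(self):
        self._approved = {}     # creator-registered datasets only
        self._audit_log = []    # record of every blocked exchange
        self._knowledge = []

    def register_dataset(self, name, records):
        """Creator-side call: whitelist a dataset for learning."""
        self._approved[name] = list(records)

    def learn_from(self, source):
        """The AI may learn only from approved sources; anything
        else is refused, logged, and never touches the system."""
        if source not in self._approved:
            self._audit_log.append(f"BLOCKED: {source}")
            raise PermissionError(
                f"'{source}' is outside the confinement boundary")
        self._knowledge.extend(self._approved[source])
        return len(self._knowledge)

box = ConfinedLearner()
box.register_dataset("curated_corpus", ["fact_a", "fact_b"])
box.learn_from("curated_corpus")   # allowed: creator-supplied data
# box.learn_from("open_internet")  # would raise PermissionError
```

Because every path into the learner runs through one checkpoint, the creator, not the AI, decides what data arrives and how fast, which is exactly the "limited-scale machine learning" lever the paragraph above describes.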
Ethical Outcomes and Issues
Over the years, many ethical concerns, both positive and negative, have formed regarding the use of artificial intelligence. One positive ethical outcome of the transparency solution is that people will feel more connected to artificial intelligence and to the process of how AI is used. One ethical issue related to this outcome is that creators may not feel the need to be transparent with the world about the workings of artificial intelligence; they can be reluctant to share information or refuse to altogether. Another is that people may still not gain a better understanding of what AI is and will continue making ill-informed decisions about its use.
A negative ethical outcome of the confinement-protocol solution is that creators may not find a way to seal artificial intelligence in its own hardware properly, and thus fail to stop the AI from exchanging information with other networks. An ethical issue related to this outcome is that creators may not be equipped with the knowledge to confine and control artificial intelligence correctly, which could explain why a working confinement protocol has not yet been mastered. Another is that the artificial intelligence could continue learning without human input; with the exchange of information, AI can grow smarter until humans can no longer control the systems.
In conclusion, AI has a long history of concepts and research that have brought many systems to fruition. Intelligent software and algorithms are programmed into many of the systems we use daily, and artificial intelligence appears to be the wave of the future. Further research on this topic is imperative to helping others gain knowledge of this phenomenon, which could lead to a better understanding and more informed decisions about whether AI is beneficial or dangerous to humans. The rise of artificial intelligence is a global societal issue because AI impacts everyone around the world. There are countless advancements in artificial intelligence, and as the projects improve, more data and deeper learning are needed for AI to work at its best. Deep learning and neural networks mimic the biological structure of the human brain, allowing artificial intelligence to learn without human input. From this, a singularity can occur, pushing AI toward the evolution of superintelligences smarter than humans. Instead of taking over humanity, however, AI and humans can merge their intelligence and work together, integrating for maximum efficiency. AI confinement protocols can serve as the solution that slows the effects of the singularity and allows humans to benefit safely from the superintelligences formed.
- Anyoha, R. (2017, August 28). The history of artificial intelligence. Retrieved from http://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/
- Arel, I., Rose, D., & Karnowski, T. (2010). Deep machine learning: A new frontier in artificial intelligence research [research frontier]. IEEE Computational Intelligence Magazine, 5, 13-18.
- Brooks, R. (1991). Intelligence without representation. Artificial Intelligence, 47(1-3), 139-159. doi:10.1016/0004-3702(91)90053-M
- Chalmers, D. (2012). The singularity: A reply. Journal of Consciousness Studies, 19, 147-167. Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.303.2938&rep=rep1&type=pdf
- Maryville University. (n.d.). Big data is too big without AI. Retrieved from https://online.maryville.edu/blog/big-data-is-too-big-without-ai/
- Rossi, F. (2019). Building trust in artificial intelligence. Online Journal of International Affairs. Retrieved from https://jia.sipa.columbia.edu/building-trust-artificial-intelligence
- Weber, R. M. (2019). Hey, Siri! Is artificial intelligence the ultimate oxymoron? Journal of Service Professionals, 73(4), 46-50. Retrieved from http://eds.b.ebscohost.com.proxy-library.ashford.edu/eds/pdfviewer/pdfviewer?vid=0&sid=0e762827-59a5-41d9-acd1-3ac36fa8df60%40sessionmgr101
- West, D. (2018). Brookings survey finds 52 percent believe robots will perform most human activities in 30 years. Brookings Institution. Retrieved from https://www.brookings.edu/blog/techtank/2018/06/21/brookings-survey-finds-52-percent-believe-robots-will-perform-most-human-activities-in-30-years/
- Yampolskiy, R. V. (2012). Leakproofing the singularity: Artificial intelligence confinement problem. Journal of Consciousness Studies, 19(1-2), 194-214. Retrieved from http://eds.a.ebscohost.com.proxy-library.ashford.edu/eds/detail/detail?vid=0&sid=d36bea2a-d977-4ebe-ab2e-1ed5ff4c338a%40sessionmgr4007&bdata=JnNpdGU9ZWRzLWxpdmUmc2NvcGU9c2l0ZQ%3d%3d#AN=2012-08628-014&db=psyh