
ARTIFICIAL INTELLIGENCE: FUTURE GROWTH AND ITS IMPLICATIONS

Abstract

Information technology has undergone a revolution, and much of the credit goes to artificial intelligence. AI and its applications are now employed in numerous domains of human life, acting as expert systems that address complex problems in industries such as research, commerce, healthcare and advertising. This paper gives an outline of the history, goals, advantages and disadvantages of Artificial Intelligence and its applications in human life. It examines how Artificial Intelligence technologies are currently used in healthcare, in education and in the e-commerce sector. The paper also discusses how Artificial Intelligence can have adverse effects on human beings, addresses common myths about Artificial Intelligence, and considers how the future may look as artificial intelligence continues to advance.


Keywords

Artificial Intelligence, Technology


Introduction

Let’s first consider what intelligence means before moving on to the notion of AI.

Intelligence:

The capacity to acquire knowledge and apply skills.

The phrase “artificial intelligence” can be summarised as “a replica of something natural (i.e., human beings) that is capable of gaining and using the information it has learned through exposure.” In a nutshell, artificial intelligence refers to machines that are capable of carrying out tasks that traditionally require human intelligence.

The Background of Artificial Intelligence

  • Year 1943: Warren McCulloch and Walter Pitts produced the first work that is today recognised as AI: a model of artificial neurons.

  • Year 1949: Donald Hebb presented a rule for updating the strength of the connections between neurons. Hebbian learning is the modern name for his rule.

  • Year 1950: In 1950, the English mathematician Alan Turing published “Computing Machinery and Intelligence,” in which he proposed a test, now known as the Turing test, to determine whether a machine can exhibit intelligent behaviour on par with a human.

Birth of Artificial Intelligence (1952-1956)

  • Year 1955: Herbert A. Simon and Allen Newell developed “Logic Theorist,” widely regarded as the first artificial intelligence program. It proved 38 of the first 52 theorems in Principia Mathematica and found new, more elegant proofs for some of them.

  • Year 1956: The American computer scientist John McCarthy used the term “artificial intelligence” for the first time at the Dartmouth Conference, and AI was first recognised as a legitimate academic discipline. High-level programming languages such as FORTRAN and COBOL were also created around this time.

The Golden Age: Early Enthusiasm (1956-1974)

  • Year 1966: During this period, academics focused on creating algorithms that could solve mathematical problems. In 1966, Joseph Weizenbaum built ELIZA, the first chatbot.

  • Year 1972: Japan produced WABOT-1, the first intelligent humanoid robot.

AI’s initial winter (1974-1980)

  • The first AI winter occurred between 1974 and 1980, during which computer scientists had to contend with a severe lack of government funding for artificial intelligence research.

The surge of AI (1980-1987)

  • Year 1980: After its winter hiatus, AI returned with “expert systems”: programs designed to make decisions in a specialised domain the way a human expert would.

The second winter of AI (1987-1993)

  • Due to the exorbitant cost and lack of effective results, investors and governments ceased funding AI research.

Developing intelligent agents (1993-2011)

  • Year 1997: The first computer to defeat a reigning world chess champion was IBM’s Deep Blue, which accomplished this feat in 1997 by defeating Garry Kasparov.

  • Year 2002: The Roomba vacuum cleaner marked the introduction of Artificial Intelligence into the home.

  • Year 2006: AI entered the business sphere, and companies such as Facebook, Twitter and Netflix began using AI.

Deep learning, big data and artificial general intelligence (2011-present)

  • Year 2011: In 2011, IBM’s Watson, a computer program built to answer questions posed in natural language, won the quiz show Jeopardy!. Watson demonstrated that it could understand natural language and quickly find answers to challenging questions.

  • Year 2012: Google introduced the “Google Now” feature for Android, which provided users with information as predictions.

  • Year 2014: The chatbot “Eugene Goostman” won a well-known 2014 “Turing test” competition and was claimed to have passed the test.

  • Year 2018: In 2018, IBM’s “Project Debater” held its own in a debate on complex topics against two expert debaters.

Objective of Artificial Intelligence

These are the primary objectives of artificial intelligence:

  • Replicate human intellect

  • Solve knowledge-intensive problems

  • Demonstrate an intelligent link between perception and action

  • Reduce the workload and stress on humans

Pros of Artificial Intelligence

1- Reduction in Human Error

The expression “human error” refers to the mistakes a person makes while performing a task. Humans inevitably make errors while working, often without noticing them, because they cannot catch every mistake. An AI system, by contrast, can be designed to check for errors at every step, so AI helps reduce human error and increase work efficiency.


2- Available around the clock

A typical human works 8-10 hours a day, excluding breaks. People need rest after a period of work to maintain their performance, and they also take leave for holidays, family occasions and health issues. AI, on the other hand, is available 24x7, which increases the amount of work that can be done in a given amount of time.


3- Help with Technology

Many advanced firms use digital assistants to engage with people, reducing the need for human support staff. Websites deploy digital assistants to help users find the items they are looking for, and users can describe their requirements to them directly. Some chatbots are designed so well that it is difficult to tell whether we are interacting with a chatbot or a human being.


Cons of Artificial Intelligence

1) High Production cost

Because AI is evolving every year, hardware and software must be updated regularly to meet the latest requirements. The money spent on purchasing such expensive machines is a burden on firms, and the machines also require time for maintenance and repair to keep functioning properly.


2) Making Humans Lazy

AI reduces the need for human intervention in many fields, which can make humans lazy and lead to a lethargic lifestyle, a contributing factor in many diseases.


3) Unemployment

As AI replaces much repetitive labour and other routine duties with robots, the need for human involvement decreases, which poses a significant challenge for employment. Many firms are looking to replace their least skilled employees with AI-driven systems that can do similar jobs more effectively.


Application of Artificial Intelligence

1- AI Application in E-commerce

Personal Shopping

Artificial intelligence is used to create recommendations based on user preferences. Recommendation systems show customers a list of items based on their views, likes, purchase history and interests, as sketched in the example below. This helps increase sales, promotion, customer loyalty and product demand, and it helps build a friendly relationship with the customer for future transactions.
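
To make the idea concrete, here is a minimal, purely illustrative Python sketch of this kind of recommendation logic; the catalogue, tags and scoring rule are hypothetical and do not describe any retailer’s actual system. Items the shopper has not yet seen are ranked by how many tags they share with the items in the shopper’s viewing history.

from collections import Counter

# Hypothetical catalogue: each product is described by a few tags.
catalog = {
    "running shoes":  {"sport", "footwear"},
    "yoga mat":       {"sport", "fitness"},
    "coffee maker":   {"kitchen", "appliance"},
    "trail backpack": {"sport", "outdoor"},
}

def recommend(viewed_items, top_n=2):
    # Build a tag profile from what the user has already viewed or liked.
    profile = Counter(tag for item in viewed_items for tag in catalog[item])
    # Score every unseen item by how strongly its tags match that profile.
    scores = {
        item: sum(profile[tag] for tag in tags)
        for item, tags in catalog.items()
        if item not in viewed_items
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend(["running shoes", "yoga mat"]))
# -> ['trail backpack', 'coffee maker']  (sport-related items rank first)

Real e-commerce systems combine many more signals, such as purchase history, collaborative filtering and trained ranking models, but the principle of matching items to an inferred user profile is the same.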


2- Application of Artificial Intelligence in Education

Creating Smart Content

Artificial Intelligence can be used to digitise material such as video lectures and conferences. Learning content can be customised for students in different grades, using interfaces such as animations. By producing and supplying audio and video summaries and comprehensive lesson plans, artificial intelligence contributes to a rich learning experience.


3-Applications of Artificial Intelligence in Life

Autonomous Vehicle

Automobile manufacturers use machine learning to train computers to drive like humans in any environment and to detect objects so that accidents can be avoided; a rough sketch of the object-detection step is given below.
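
As a rough illustration only, and not any manufacturer’s actual pipeline, the Python sketch below shows how a pre-trained object detector from the open-source torchvision library could flag vehicles and pedestrians in a single camera frame. The file name and confidence threshold are assumptions made for the example, and it presumes torchvision 0.13 or newer.

import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Load an object detector pre-trained on COCO (its classes include cars, people, bicycles).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

frame = Image.open("dashcam_frame.jpg")  # hypothetical frame from the vehicle's camera
with torch.no_grad():
    detections = model([to_tensor(frame)])[0]

# Keep only confident detections; a downstream planning module could then brake or steer.
for box, score, label in zip(detections["boxes"], detections["scores"], detections["labels"]):
    if score > 0.8:
        print(int(label), [round(float(v)) for v in box], float(score))

Production self-driving systems use far more elaborate, purpose-built models together with sensor fusion, but the basic step of detecting objects in every camera frame is the same.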


How can Artificial Intelligence be Hazardous?

1- AI is programmed to perform something harmful

Systems with artificial intelligence that are programmed to kill are referred to as autonomous weapons. These weapons have the potential to easily result in massive casualties in the wrong hands. Furthermore, an AI arms race can unintentionally culminate in an AI conflict with many victims. These weapons would be intended to be very hard to “switch off”, so humans could possibly lose control of such a situation. This risk is present even with narrow AI, but grows as levels of AI intelligence and autonomy increase.


2- Misalignment between our objectives and the machines

The second way AI can be a risky technology is that it can produce catastrophic outcomes even when it is programmed to do something beneficial, if its objective is misaligned with ours. Suppose we instruct an autonomous vehicle to “transport us to our destination as quickly as possible”. The vehicle will promptly carry out the command, but unless we also state that traffic rules must be obeyed because we value human life, the ride may endanger people. Breaking the law or causing an accident was not what we actually wanted, yet the vehicle completed exactly the task we asked of it. Highly intelligent machines can therefore be destructive when the task they are asked to complete does not match our true intentions.
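
The point can be illustrated with a deliberately simplified toy example in Python; the route names and numbers below are entirely invented. An objective that only rewards speed picks the dangerous route, while one that also penalises rule violations picks the safe one.

# Toy example of objective misalignment; the data below is made up.
routes = {
    "speed through town": {"minutes": 12, "violations": 3},
    "legal route":        {"minutes": 18, "violations": 0},
}

def misspecified_cost(route):
    # "Get there as quickly as possible" and nothing else.
    return route["minutes"]

def aligned_cost(route, penalty=100):
    # Travel time plus a heavy penalty for every traffic-rule violation.
    return route["minutes"] + penalty * route["violations"]

print(min(routes, key=lambda name: misspecified_cost(routes[name])))  # -> speed through town
print(min(routes, key=lambda name: aligned_cost(routes[name])))       # -> legal route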


Future of Artificial Intelligence in different Sectors

1- E-commerce

In the near future, artificial intelligence will be crucial to the e-commerce industry. It will affect every part of e-commerce, from sales and marketing to distribution and finance, and warehousing and inventory systems are likely to become increasingly automated.


2- Healthcare

AI will be essential in the healthcare industry for making quicker and more accurate diagnoses of illness. The use of AI will speed up the discovery of new drugs and reduce its cost. Additionally, it will increase patients’ involvement in their own care and make booking appointments and paying bills easier and less error-prone. Apart from these advantageous applications, the biggest barrier for AI in healthcare is getting it accepted into routine clinical practice.


3- Education

Artificial intelligence in the education sector is here to stay, and its presence in the classroom and the school is going to increase rapidly. AI-based apps will help students understand sophisticated and complex topics in the way that suits them best.


Myths about Artificial Intelligence

1- Superintelligence is impossible by the year 2100

The truth is that we cannot yet determine whether or when superintelligence will arrive. Nothing is confirmed; it could happen in a few decades, in a few centuries, or never. Several surveys have asked researchers, “How many years from now do you think we will have human-level AI with at least a 50% chance?” All of these surveys reach the same conclusion: we do not know, because the world’s top specialists disagree. In a survey of AI experts conducted during the 2015 Puerto Rico AI conference, the average answer was the year 2045, but other experts gave estimates of a hundred years or more.


2- All human employment will be eliminated by AI

It is undeniably true that the emergence of AI and automation has the potential to dramatically disrupt the labour market, and in many cases it is already doing so. However, it would be a great oversimplification to think of this as a simple transfer of labour from humans to machines. People worry about losing their jobs as AI continues to advance, because it has transformed industries across all sectors. In practice, however, AI has also increased the number of job prospects for people across industries: every machine needs people to build, run and maintain it. Although AI has replaced certain occupations, it continues to create new jobs as well.


Conclusion

A greater understanding of the differences between AI and humans is necessary to prepare for a future society in which Artificial Intelligence will have a far more prevalent influence on our lives. Artificial Intelligence and technology are a side of life that constantly surprises us with new innovations. How the technology is used depends on human beings: on how we employ these machines for better productivity, so that technology serves beneficial purposes rather than causing harm. In this paper we have examined the definition of AI, its history, its pros and cons, its applications, the ways in which it can be hazardous, its future implications and some myths about it.


--



This article is written by Animesh Nagvanshi of The Institute of Chartered Financial Analyst of India, Dehradun.

