Whether man or machine: no intelligence without learning

February 11, 2019

Only fundamental re-learning turns an AI system into a versatile tool

Human intelligence is inextricably linked to a phenomenal ability to learn. So why should machine intelligence be any different?

A system is artificially intelligent if it is capable of solving a problem independently by using an algorithm (i.e. a targeted solution rule, a "recipe"). For it to implement this rule truly "intelligently", it must, as mundane as it may sound, "learn" everything from scratch, as a glance at AI practice shows.

At Geutebrück, the repertoire of AI applications includes recognizing certain elements in an image and comparing them with elements that are already known. To "recognize" something, the system must be trained to distinguish desired objects (hits) from undesired ones. As with human learning, training means that the number of hits increases over many repetitions: the system "gains experience".
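The following minimal sketch illustrates this "hits versus non-hits" training idea in Python with scikit-learn. The feature vectors, labels and parameters are all invented for illustration; a real video-analytics pipeline would of course work on far richer image data.

```python
# Toy sketch: train a binary classifier to separate desired objects ("hits")
# from everything else, using invented feature vectors. All data here is
# synthetic; it only illustrates the principle described above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each image has already been reduced to an 8-dimensional feature vector.
hits = rng.normal(loc=1.0, scale=0.5, size=(200, 8))       # desired objects
non_hits = rng.normal(loc=-1.0, scale=0.5, size=(200, 8))  # everything else
X = np.vstack([hits, non_hits])
y = np.array([1] * 200 + [0] * 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)

# "Gaining experience": the score reflects how well repetition has paid off.
print(f"hit-detection accuracy: {clf.score(X_test, y_test):.2f}")
```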

 

How do I tell my student

Computers are man-made constructions that initially do only what they are told to do. For them to be able to do this, people must define the environment in which they are to operate. This is all the more true for AI systems whose task is to make "intelligent decisions". When it comes to comparing data with existing knowledge, AI systems must first be taught which data is correct and relevant, which they should retain and process, and which they can discard.

This applies to words in written or audio form just as it does to objects "seen" in photos or video files. In medicine, for example, AI systems are used to process a multitude of information: written or spoken notes about symptoms and observations about their course, archived medical records and the results of imaging procedures (ultrasound, X-ray, MRI, etc.) are combined in powerful AI applications that compare and analyze them and draw conclusions about probable causes of disease.

Of course, images of conspicuous skin changes, for example, mean nothing to a computer at first. It must first learn to compare them with known patterns and "understand" what to pay attention to when scanning an image grid: two- and three-dimensional structure, pigmentation, the course of edges, and so on. Finally, pixel-level properties of the image must be compared with database information. For each pixel, a decision must be made: is this characteristic on the list of interesting characteristics (because it deviates from the defined standard) or not?
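As a toy illustration of that pixel-by-pixel decision, here is a short Python sketch: every pixel is checked against a "defined standard" (here, an invented intensity range standing in for normal pigmentation), and deviating pixels are flagged for closer comparison. The image and thresholds are made up for the example.

```python
# Flag every pixel that deviates from a hypothetical "defined standard".
# The intensity range and the random stand-in image are invented.
import numpy as np

expected_min, expected_max = 0.2, 0.6  # assumed "normal" intensity range

image = np.random.default_rng(1).random((64, 64))  # stand-in for a scanned image grid

# Boolean mask: True wherever a pixel falls outside the defined standard.
interesting = (image < expected_min) | (image > expected_max)

print(f"{interesting.sum()} of {image.size} pixels flagged as interesting")
```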

This principle is applied wherever the analysis of camera-generated data is involved. If, for example, a beverage crate or a whole pallet of crates passes in front of the camera, AI software can decide by lightning-fast comparison whether a slot is empty or occupied, provided it has previously been enabled to make this type of decision through training.
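A minimal sketch of that crate check might look like this: crop each slot from the camera frame and ask a previously trained decision function "empty or occupied?". The slot coordinates and the brightness-threshold "model" below are hypothetical stand-ins for a trained classifier.

```python
# Sketch of the crate check: classify each slot of a (fake) camera frame.
import numpy as np

def classify_slot(crop: np.ndarray) -> str:
    # Stand-in for the trained model: assume a bottle makes the slot darker
    # on average than an empty slot (threshold invented for this sketch).
    return "occupied" if crop.mean() < 0.5 else "empty"

frame = np.random.default_rng(2).random((240, 320))  # fake grayscale camera frame

# Hypothetical slot positions in the frame: (row, col, height, width).
slot_boxes = [(20, 20, 60, 60), (20, 100, 60, 60), (20, 180, 60, 60)]

for i, (r, c, h, w) in enumerate(slot_boxes):
    print(f"slot {i}: {classify_slot(frame[r:r + h, c:c + w])}")
```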

In the security field, AI that has become "smart" through machine learning is used to check whether employees are wearing all elements of their protective clothing in accordance with regulations before they enter sensitive parts of a building. To do this, the software must learn to recognize certain characteristics of the clothing (e.g. shape, color or pattern) and answer the question: present or not present?
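One simple way such a presence check could work is a color test: does the learned signal color appear in the region where, say, a safety helmet is expected? The HSV range and threshold below are invented for the sketch; a trained system would learn such characteristics from examples.

```python
# Hedged sketch: "present or not present?" for a safety helmet, decided by
# the fraction of pixels matching an invented high-visibility color range.
import numpy as np

def helmet_present(hsv_region: np.ndarray, min_fraction: float = 0.05) -> bool:
    h, s, v = hsv_region[..., 0], hsv_region[..., 1], hsv_region[..., 2]
    # Assumed HSV range for "safety orange" (values are illustrative only).
    mask = (h > 0.02) & (h < 0.10) & (s > 0.5) & (v > 0.5)
    return mask.mean() >= min_fraction

region = np.random.default_rng(3).random((40, 40, 3))  # fake head region in HSV
print("helmet detected" if helmet_present(region) else "helmet missing")
```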

The trained knowledge must be even more sophisticated if, instead of objects such as packages or bottles, faces are to be recognized, for instance in intelligent access control. Although all these processes follow the same basic principle of comparing image information, the demands on the software's capabilities vary.

 

New object? Back to the start

At this point, however, it is crucial to realize that the term "artificial intelligence" contains a high degree of hubris. After training, the AI system can carry out just one single recognition task, i.e. detect precisely the object or information it has learned to consider important through countless training runs. For example, if there are balls instead of bottles in a beverage crate and the task is to determine the number of bottles present, the software will simply report: no bottles found!

But if you want to know how many balls are in the crate instead of bottles, the student AI is back at the very beginning of its training: all the learning steps previously undertaken with bottles must be repeated with balls. To see the difference from human intelligence, just imagine what this would mean in a mathematics examination: the examinee could only solve a problem that had already been practiced in every important detail during preparation.

The slightest deviation (e.g. a pentagon instead of a quadrilateral) would completely overwhelm them. The result: new lessons with pentagon exercises. In the same way, the AI must be sent back to the school desk whenever a significant detail changes (a different safety helmet, a new bottle shape, etc.).

Companies that want to use AI systems should be aware of this fact. The convenient copy-and-paste logic of "what works with bottles will probably also work with printer cartridges" applies only if comprehensive "further training" has been carried out beforehand at the supplier's training center. So much for the bad news. The good news: once AI systems can apply what they have learned with virtually no errors, they can improve quality and reduce costs. And if you still suspect that AI is witchcraft, rest assured: true intelligence sits, even today, in FRONT of the computer!

 

