Complications with Vision Processing

There have been a number of advancements in recent years with robotic vision.

As seen in this recently published article:

Improved Generalized Belief Propagation for Vision Processing

Well..  hold on.

One of my goals with this blog, I feel, is to offer translation services.

I want to take scientific articles like this one and summarize them in normal speak for people who may get lost in the mumbo-jumbo. That is not to say the average reader isn’t intelligent enough to comprehend what is written there- rather, the authors who publish these articles write in a way that is basically unintelligible to people outside their field. After all, they aren’t normally authors, they are scientists, so why should what they write be easy to read?

Ok, back to GBP- or Generalized Belief Propagation.

I could go into why Robots are used, and the benefits of a Robot to a person, and yadda yadda- but that’s an entire post in and of itself, and I’m here now to talk about GBP, so..  sorry, you’ll have to get that one later.

All you need to be concerned with right now is that some robots need to see.

A machine doing certain types of jobs needs to be aware of what is in front of it, so it can act accordingly.

Simple yes?

A Robot in the engineering world ‘looks’ at what is in front of itself. It will try to form patterns and recognize what is there based on pre-designated conditions. It then has a NO GO/GO command option.

A NO GO is when the Robot looks in front of itself and cannot recognize what is there. It will then follow the command for a NO GO, whether that is to send up an error report, to try from a different angle, or to just sit there and do nothing.

A GO means the Robot can make sense of what is in front of it. It recognizes the thing there as matching a pattern in its memory and will then follow the next command associated with that pattern- which could be any number of things, like demolishing the object, connecting piece A to piece B, or turning a part over.
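That decision flow can be sketched in a few lines. Everything here- the pattern table, the matcher, the fallback behavior- is a hypothetical stand-in for illustration, not any real robot API:

```python
# Toy sketch of the NO GO/GO decision loop described above.
# The pattern store and "matcher" are made-up stand-ins.

KNOWN_PATTERNS = {
    "bolt": "turn part over",
    "panel": "connect piece A to piece B",
}

def match_pattern(image):
    """Pretend matcher: returns a pattern name if recognized, else None."""
    return image if image in KNOWN_PATTERNS else None

def decide(image):
    pattern = match_pattern(image)
    if pattern is None:
        # NO GO: fall back to whatever failure behavior is configured
        return "NO GO: send error report / retry from a new angle"
    # GO: run the command associated with the recognized pattern
    return f"GO: {KNOWN_PATTERNS[pattern]}"

print(decide("bolt"))     # GO: turn part over
print(decide("mystery"))  # NO GO: send error report / retry from a new angle
```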

There are a number of things involved here. First, the camera involved. Is it electro-optical (normal) or infrared? Does the machine have vision like the Predator, checking variances in levels of heat?

Who knows- and who cares. From an engineering standpoint it is all the same thing. The Robot (a computer) is getting input, just like when you type a command into your computer. The computer (Robot) will either recognize the command and move along, or it will tell you you’re an idiot for not typing in a command it knows.

That part where the Robot looks through its database is a huge hiccup the Robotic Vision community has always had.  It’s not a show stopper at all, and there have been a number of amazing developments, but the time it takes for a system to try to match what it is ‘seeing’ against what it knows is far too long to be practical in many cases.

For instance, if you held your hand up and looked at it- you would recognize it as a hand. You could imagine taking a picture of it, putting that into a computer, and saying HEY, any time you see that, this is a HAND.

Now what if you turned your hand ninety degrees? You still know it’s a hand, but will the computer? How about if you used your right hand instead of your left? How about the difference between the hands of a child and a senior citizen, or a light-skinned male and a dark-skinned female with red nail polish, or a thin person and a heavy person? You as a person can still say all of the above are hands. But would a computer? How about if you made a fist?
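To see why this trips up a naive matcher, here is a toy: a 3x3 binary “hand” template compared pixel-for-pixel against its own ninety-degree rotation. The grids and matcher are made up purely for illustration:

```python
# A naive, pixel-exact template match breaks under a 90-degree rotation.
template = [
    [1, 1, 1],
    [0, 1, 0],
    [0, 1, 0],
]

def rotate90(grid):
    """Rotate a square grid 90 degrees clockwise."""
    return [list(row) for row in zip(*grid[::-1])]

def matches(seen, known):
    """Pixel-for-pixel comparison: the crudest possible recognizer."""
    return seen == known

seen = rotate90(template)           # the same "hand", new orientation
print(matches(template, template))  # True
print(matches(seen, template))      # False: the naive matcher calls this a NO GO
```

A human shrugs off the rotation; the exact matcher fails the moment a single pixel moves. Real systems use rotation- and scale-tolerant features for exactly this reason.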

These comparisons are crucial in robotic vision and one of the reasons the industry still has much work to do.

This article, which comes from a group in China working on methods to increase vision efficiency, proposes a new method of algorithm processing to not only speed up the process but also increase accuracy.

One of the ways they accomplish this: when the robot “scans” an object for recognition, the two scan directions are static and parallel along each axis. The purpose is to match what the computer sees to what it has on file, and this method, along with a formula they provide (math math math, I know), increases output.
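The article’s specific scan-direction scheme and formula are beyond a blog sketch, but the core idea behind belief propagation itself- neighboring pixels passing normalized “messages” about which label they prefer- can be shown in miniature. This is a toy, textbook sum-product pass on a three-pixel chain with made-up potentials, not the paper’s improved GBP:

```python
# Minimal sum-product belief propagation on a 3-pixel chain (toy numbers).
# Unary potentials: how well each label (0 or 1) fits each pixel's observation.
phi = [
    [0.9, 0.1],  # pixel 0 strongly looks like label 0
    [0.5, 0.5],  # pixel 1 is ambiguous
    [0.2, 0.8],  # pixel 2 looks like label 1
]
# Pairwise potential: neighboring pixels prefer to share a label.
psi = [[0.8, 0.2],
       [0.2, 0.8]]

def send(msg_in, unary):
    """Message from a pixel to its neighbor: sum over the sender's labels."""
    out = [sum(unary[x] * msg_in[x] * psi[x][y] for x in range(2))
           for y in range(2)]
    s = sum(out)
    return [v / s for v in out]  # normalize so the message sums to 1

m01 = send([1.0, 1.0], phi[0])   # pixel 0 -> pixel 1
m12 = send(m01, phi[1])          # pixel 1 -> pixel 2
belief2 = [phi[2][y] * m12[y] for y in range(2)]
label2 = max(range(2), key=lambda y: belief2[y])
print(label2)  # 1: pixel 2's own evidence outweighs its neighbors' pull
```

Generalized BP extends this by passing messages between whole regions of pixels rather than single pairs, which is where the speed and accuracy battles are fought.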

So- why does this matter to A.I.?

On two fronts, actually.

First, progress in the way a computer visually scans an item is critically important to producing a cognitive system. Yes, blind people exist and can think- I get that- but that is irrelevant to A.I. progression.

The code and algorithms which tell a system how to see need to be fully developed, far beyond where they are now- but the processing that tells a system what it is seeing is where today’s flaw lies.

I have seen others literally tell a robot to look at something and then say, “Hey Robot, that is a fish”. So then the Robot looks for itself, sees what it is that makes a fish, and goes from there. That’s more on the right track, but the brains of these robots need to be designed to allow for variations within classes. Fish could mean any number of things. Dead fish, live fish, drawn fish, clownfish, barracuda- all are fish.  The legwork is going to be in making these A.I. systems understand those classification levels.

Man and/or Woman = Human

Woman ≠ Man

Human ≠ Man

Man and/or Woman = Mammal

Wolf = Mammal

Wolf ≠ Man

So, that whole classification system needs to be understandable by a computer, for everything.

ANIMAL – MAMMAL – HUMAN – MALE – CHILD

ANIMAL – REPTILE – SNAKE – COBRA – FEMALE

ANIMAL – MAMMAL – HORSE – PERSIAN – MALE

You as a human can form those groupings and understand them. A computer will have to learn that. Not be told what it is looking at, but learn HOW to see what is in front of it.
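Merely representing groupings like these is the easy part- a lookup table and a walk up the tree. The `PARENT` table and `is_a` walker below are illustrative only; the hard problem the post describes is a machine learning these groupings on its own rather than being handed them:

```python
# The classification paths above as a simple parent-pointer taxonomy.
PARENT = {
    "mammal": "animal",
    "reptile": "animal",
    "human": "mammal",
    "horse": "mammal",
    "wolf": "mammal",
    "snake": "reptile",
    "cobra": "snake",
    "man": "human",
    "woman": "human",
}

def is_a(thing, category):
    """Walk up the tree: True if `thing` falls under `category`."""
    while thing is not None:
        if thing == category:
            return True
        thing = PARENT.get(thing)  # step to the parent, or None at the root
    return False

print(is_a("man", "mammal"))    # True
print(is_a("cobra", "animal"))  # True
print(is_a("wolf", "man"))      # False
```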

You Forgot Your Science

You are a human.

At least- anything that can currently read this blog and understand when I use the term “YOU” as defining the existence of the reader would only be a human.

Computers can capture what I am writing. They can perform limited word groupings and analysis. They could try to associate meanings by correlating words with other pre-loaded words- but they do not actually READ.

Computers are not literate.

So, back to my point. You are a human.

Your ancestors were human.

Well..  to an extent.

Upwards of 97% of scientists now agree that humans evolved from simpler animals throughout pre-history, back through early shrew-like mammals and even further, all the way back to simple multi-celled organisms. The odd 3% refuse on the basis of religion. Whether or not evolution fits with a person’s belief in a higher being is irrelevant. Reality doesn’t care much how a person feels about a situation, much as it ignored the geocentric model of the Roman Catholic church and the Greek belief that sacrificing to Poseidon would ensure safe travel over the sea. Your personal feelings on how the cosmos should work are meaningless when describing how it actually does work.

Just about here, you are wondering…  Wait a minute, I thought I was going to read about robots and shit.

Well let me get to that.

When science fiction authors create a story about real Artificial Intelligence- it is almost always one of two things.

1. They build a super machine with so much memory and capacity and such an enormous database- that it just..  ends up becoming ‘aware’.

2. They build a machine, flip a switch, and voila- there is this walking, talking machine that knows how to walk/see/hear/talk, is basically invincible to a baseball bat to the chest, and is invariably smarter than a human…  bbuuuttt it always has a flaw, like not loving, or a lack of intuition, or something.

Now, let me start by criticizing those methods.

The first idea is just playing to an audience which thrives on over-the-top scenarios- something more in line with fantasy than science fiction- and it does not deserve any further critique when looked at scientifically, because it was never actually intended to be scientific (and thus not within the realm of the real, more in the realm of the feel).

The second method of developing Artificial Intelligence, although widely accepted and rarely questioned, is flawed at every angle. I would like to ask you to think about this for a moment.

When you sit in a room and listen to the person in front of you talk, there is actually a complex process going on in your ears and brain which you are not aware of. You focus. You are able to listen to that person and ignore the other people in the room talking, ignore the sound of your paper shuffling and the footsteps of people walking, and disregard the ever-present air conditioner spewing out noise. While there, you can switch your focus.. you can ‘listen’ to another conversation and ignore the person in front of you. You can pay attention to the paper. You do this…easily.

Easily NOW.

At birth- you were incapable of this.

You had to learn how to HEAR. Not to detect sound… you could always do that. But to filter signal from noise, to focus, to not actually listen to everything at the same time.

This takes from when you begin to hear, while still in the womb, until many months into infancy.

And hearing isn’t the only sense or ability like this. At birth, all human sight is extremely blurry and really only notices differences in the intensity of light and motion (which is the interruption of light signatures when regarded to sight).

These are examples of things learned through simple usage. A baby, a kitten, a pony..  whatever animal you consider- at birth they are given these capabilities, which they simply develop over time, because their genetic code states that seeing is this and hearing is that.

So, we have abilities learned through repetitive usage, and we have abilities learned through imitation. A toddler toddles because for the previous few months they have seen older humans walking and said, “Holy cow man, that’s way better than the way I shuffle around on the ground, I need to try that.”  Parents or adult figures have a huge role in this. We stand the child up, we put them in bouncers, we encourage their development. If a baby is never introduced to the concept of standing, never sees others do it, and never gets help doing it- they will never stand up and walk.

Walking is a developed skill we take for granted, but it took us each nearly a year to really figure out and get good enough at to not continually bust our collective asses.

Now- there are a number of other things, like these three examples, which humans or any other advanced life form do or know that need to be developed. It takes time. No animal is born as an adult.

And that is my main point. If there is going to be a true Synthetic Intelligence- true Artificial Intelligence, a real ‘being’ made of computer hardware and programming- it would need a development phase. Simply turning on and being fully aware, hearing, seeing, knowing, understanding- it would be too much; it wouldn’t know what to think of it all. Children spend years learning their first native language because there’s so much that must be experienced and wouldn’t make any sense at all without experience.

Also- just as you developed from an earlier species- the scientists in a lab would need to start small.  Why does artificial ‘life’ necessitate a human equivalent? That is actually the wrong place to start.  The whole thing should begin with a small creation, one able to fit in your hand. From there, tweak it, and move on and on.

Just as a human is coded with their DNA and RNA to provide the basis for everything about them- a computer has code.

I understand it’s not as simple as “well, start me up one of them human programs in a PC Tower”-  but the framework is the same. General academic consensus now holds that our thoughts and mind are simply how our brain interprets the input coming in from our senses, and then works with that data to plan the future or entertain us in the mean time.

Give a properly programmed computer sensory input, and its processors could theoretically do the same. The key is the programming, and development time.

Humans are giant computers made out of different materials.

There is a shell, with guts and programming.  The key is getting the right guts, and the right programming.

In my next article, I will look at an advancement in technology related to my theory and comment accordingly.

Thank you.

I’d like to point out that there have been a very limited few in Science Fiction who have not taken the turn on awareness route for granted. In Star Trek: The Next Generation, the character Data takes steps to ‘develop’ his daughter, but it was very limited- most likely due to time limitations on the television show. There have also been some who have gone a completely different route- I am simply speaking of the majority.

Welcome.

Welcome to my first blog post on the subjects relating to Synthetic Intelligence.

My intention with this blog is to provide commentary on scientific and technological developments in the fields of Artificial Intelligence and robotics.

For advancements in the robotics fields, I will focus on mechanized adaptability, sensory analysis, natural language processing, nanotechnology capabilities, and self-repairing capabilities.

The bulk of my writing will be on Synthetic/Artificial Intelligence. I will mainly present my own theories and methods for producing cognitive machines based on an evolutionary development cycle I shall speak briefly to in my next post.

I do not pretend that the average person out there finds this subject interesting, but I do hope the layman can gather some insight into the theory of what I do feel will be a milestone of human progression, while the fan of such subjects can perhaps learn something or see the unique ideas I put forth as an exciting possibility.

Either way, I thank you for your time.