Recently while stumbling around the internet looking for a way to pass the time while hiding out from the rain, I happened upon an IoT News article on a new Microsoft application for iOS devices.
The app is called ‘Seeing AI’ and, using the power of artificial intelligence, text recognition, and facial recognition, it aims to help people who are blind or visually impaired analyze their surroundings. It’s a talking camera app that will read documents or signs aloud when you hold it up to them, and it can even tell you who’s nearby and what emotion they’re showing.
I was immediately intrigued by this concept, especially for people who don’t have ready access to braille novels, textbooks, or letters from friends and relatives. The app can even recognize monetary notes so one won’t accidentally pay someone the wrong amount. However, the idea of facial recognition and narration left me a bit confused. I couldn’t think of a scenario in which it wouldn’t be awkward to hold up your phone to someone approaching you and have it announce “60-year-old man looking happy”, especially if it was a 40-year-old woman who was actually very angry at you for some reason or another. So, to give it a fair shot and maybe understand the justification for some of the new app’s features, I downloaded it and tried it out.
The experimental app was free for my iPhone and very simple to use. It walked me through a couple of introductory slides and then I was set to go. When I opened the camera, the back of my phone happened to be facing my keyboard, and I immediately heard it start reading what was on the keys. “Alt, option” and “command” were the first labels I heard. So far, so good! I was excited to find out what else the application could do.
Before I went and found a real, live human to test the app on, I went to Google Images and typed in “happy person”. One of the first images to pop up was a picture of a young boy smiling. Not realizing that the app had turned on my flashlight, I held my phone up to the picture, which produced a very blurry image: you could see the outer edges of the boy’s face, but everything from the top of his head down to his eyes was washed out by the glare of the flash. Within a few seconds, the app spoke: “64 year old male looking happy.” It was cool that even though the image wasn’t very clear, it recognized that he was male and that he was smiling. I assume the app registered him as older because the hair on top of his head wasn’t visible in the picture.
Next, I went to my sister and asked her to help me test the application out. We sat in her room and started to test its reliability and think of ways in which the facial recognition feature could be super useful.
Surprisingly, the facial recognition was pretty accurate, aside from the fact that it thought I was about 20 years older than I actually am! It read our emotions very well, even when we made strange faces to test the app’s limits. Although we couldn’t come up with a solid scenario in which someone would use this feature, I imagine that with headphones in, a user could point the camera at, say, a banker or grocery store clerk, and the app could help them better gauge their surroundings and figure out where to steer a conversation.
Overall, I was so excited to see these changes in IoT happening in front of me! Although I know there is some controversy surrounding AI, and some believe we are giving computers too much power and emotion, this app is proof that artificial intelligence can be a great thing. It’s a bit strange that a computer can scan your features and “understand” what you are feeling. However, AI has come a long way, and hopefully applications like Seeing AI will help people understand that this new and advanced technology should be embraced and can have a very substantial, positive effect on the world. Download it, check it out, and it just might be the perfect tool for someone you know!