The Story of Baby’s Vision

I’m writing this article just a few weeks before my daughter Neena turns one year old. It’s been an incredible journey watching her grow up, learn new things, start recognizing her mom and dad, and eventually flash us a smile. But I’m not here to share her story; rather, I want to share how she inspired me to build an app called Baby’s Vision (available only for iOS devices at the moment).

Screenshot of Baby’s Vision

It wasn’t long after she first looked at me at the hospital with those big, very dark eyes that I started to wonder what she was actually able to see. In the weeks that followed, I kept thinking about it, especially as she slowly started to recognize things around the house. Sleep deprivation kept me from digging deeper, but as soon as we managed to catch up on sleep a tiny bit, I started researching.

Based on articles from the American Optometric Association, Wikipedia, and educational videos on YouTube, I was able to somewhat imagine how she sees the world. As I quickly learned, though, that was mainly because I’m a visual person. Talking to friends (with and without kids) and family, it became clear that I was pretty much alone in that.

So at this point, the inner nerd was satisfied, since I had the answers I was looking for. But the educational nerd in me wasn’t. Having worked professionally on a variety of interactive projects, and truly enjoying building apps and experiences that teach kids (and anyone who still feels like a kid), I had to grab my laptop and hack together an app that would simulate my daughter’s vision.

That was when Baby’s Vision was born. Every minute I could spare, I built out the app, showed it to friends for feedback, and eventually published it to the App Store. I ran a bunch of experiments to leverage the modern camera systems in the fancy new phones, hoping to show truly correct depth perception, but quickly learned that this was overkill: new parents care more about battery life than about 100% accuracy. So I settled on a middle ground, tweaking blur values by hand and optimizing the app so it doesn’t drain the phone’s battery.
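For the curious, here’s roughly what that middle ground can look like in code. This is a minimal sketch, not the app’s actual implementation: the age-to-blur mapping, the function names, and all the numbers are invented for illustration.

```swift
import CoreImage
import CoreImage.CIFilterBuiltins

// Illustrative sketch only: map the baby's age in months to a blur radius.
// These numbers are made up for the example, not the app's real values.
func blurRadius(forAgeInMonths months: Double) -> Float {
    let newbornRadius = 40.0                     // hypothetical blur at birth, in pixels
    let progress = min(max(months / 12.0, 0), 1) // vision sharpens over the first year
    return Float(newbornRadius * (1 - progress)) // simple linear falloff
}

// Apply the blur to a camera frame as a CoreImage pipeline.
func simulateBabyVision(on frame: CIImage, ageInMonths: Double) -> CIImage {
    let blur = CIFilter.gaussianBlur()
    blur.inputImage = frame.clampedToExtent()    // avoids darkened edges from the blur
    blur.radius = blurRadius(forAgeInMonths: ageInMonths)
    return blur.outputImage!.cropped(to: frame.extent)
}
```

Driving a single filter parameter from the age keeps things cheap, since CoreImage does the heavy lifting on the GPU.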

For the developers out there: I used CoreImage filters, so most of the computation runs natively on the GPU, and for the AR feature I’m using ARKit. The AR feature is still early (it shows a beach ball floating in the room, so you can walk around it and get closer to see how it gets less blurry), but I think it’s a cool use of augmented reality technology.
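And here’s a rough sketch of how that distance-driven blur could be wired up with ARKit and SceneKit. Everything in it is an assumption for illustration (the class name, the 1.5 m placement, and the distance-to-blur mapping are all invented), not the app’s actual code:

```swift
import ARKit
import SceneKit
import CoreImage

// Sketch of the AR idea, assuming an ARSCNView running world tracking:
// a ball floats 1.5 m ahead, and a CoreImage blur attached to its node
// eases off as the camera gets closer. All values are illustrative.
final class BallBlurDelegate: NSObject, ARSCNViewDelegate {
    private let ballNode: SCNNode
    private let blurFilter = CIFilter(name: "CIGaussianBlur")!

    init(attachingTo sceneView: ARSCNView) {
        ballNode = SCNNode(geometry: SCNSphere(radius: 0.15))
        super.init()
        ballNode.position = SCNVector3(0, 0, -1.5) // 1.5 m in front of the origin
        ballNode.filters = [blurFilter]            // blur only this node's render
        sceneView.scene.rootNode.addChildNode(ballNode)
        sceneView.delegate = self
        sceneView.session.run(ARWorldTrackingConfiguration())
    }

    // Called every frame: farther away means blurrier, up close is nearly sharp.
    func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
        guard let camera = renderer.pointOfView else { return }
        let distance = simd_distance(camera.simdWorldPosition,
                                     ballNode.simdWorldPosition)
        let radius = min(Double(distance) * 10.0, 25.0) // invented mapping
        blurFilter.setValue(radius, forKey: kCIInputRadiusKey)
    }
}
```

One thing to keep in mind with this pattern: ARSCNView holds its delegate weakly, so the object above needs to be kept alive by whoever owns the view.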

So far, the app has been pretty successful, and I keep getting emails and texts from users and from friends of friends who absolutely love it. If you love it too, please take a moment to leave a positive review in the App Store. And if you have feedback that could make the app even better, please send me a message. I’m always open to feedback and I promise you’ll get a personal response.

You can download it here (the app itself is free, but if you’d like access to all milestones, there’s a small one-time in-app purchase that unlocks all features for life).

Now that Neena has fully developed her vision, I’m off to the next challenge: helping her develop her vocal skills. She’s being raised bilingually, and I speak to her exclusively in German. It turns out there are not that many tools and apps out there for this very specific (and yet not uncommon) scenario of raising a child bilingually. I’ll share more about this project in another article shortly, so please check back soon.