
Decorous: launching a generative AI iOS app | by Ryan Gordon | Jan. 2023

I’ve been developing mobile apps in one form or another since 2008. I’ve been part of teams that built successful products from the ground up, such as Vine, Dresr (acquired by Google), and Rodeo (Area 120, acquired by YouTube), but I’d never launched anything on my own.

Over the holidays, I decided to see what it takes to launch an app solo. The result was Decorous: AI Home Makeovers. I wanted to write down the highlights of what I learned along the way to share with all of you.

When decorating our newly purchased home, I struggled to visualize empty rooms in different styles. The Pinterest photos look great, but would the style match my room? One morning in the shower I was struck by a thought: could generative AI do virtual staging, or at least give me some inspiration? Could I build something where users upload a photo of their room, pick a style, and let generative AI do the rest?

If you’re not familiar, StableDiffusion is a powerful text-to-image diffusion model. Diffusion models work by progressively removing noise from an image, so that each round of noise reduction produces an image that more closely matches the text you entered. In the base case, it starts with pure noise and generates something new from just text.

StableDiffusion works a bit differently: it compresses images into a latent vector with far fewer dimensions used to represent the information, allowing it to run faster and on less powerful hardware. Noise is added to and removed from this latent representation, and the resulting vectors are decoded back into an image.

If that’s not cool enough, you don’t have to start with pure noise either: with its img2img functionality, you can provide a starting image, choose how much noise to add to it, and let it rip. The amount of noise you add effectively determines how much the model will change the input image. Sounds pretty useful for design recommendations.
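
To make the img2img knobs concrete, here’s a minimal sketch using Hugging Face’s diffusers library. I ultimately call a hosted API instead (more on that below), so treat the model ID, prompt, and settings here as illustrative rather than what ships in the app:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Load the pipeline once; fp16 keeps VRAM usage manageable on a gaming GPU.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

# SD 2.1 works best near its native 768x768 resolution.
init_image = Image.open("empty_room.jpg").convert("RGB").resize((768, 768))

# strength controls how much noise is added to the starting image:
# 0.0 returns the input unchanged, 1.0 ignores it entirely.
result = pipe(
    prompt="a cozy scandinavian living room, interior design photo",
    image=init_image,
    strength=0.6,
    guidance_scale=7.5,
).images[0]

result.save("staged_room.png")
```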

You can also fine-tune it by teaching it to reproduce specific subjects, which is what those AI avatar/profile picture apps are built on.

The first step was messing with StableDiffusion to see what I could get out of it. The plan was to engineer the prompt, get the settings right, and then build the UX around the right abstractions.

I thought the easiest and most flexible way to get it up and running was to deploy it on a Google Compute Engine VM. My goal was to run it on cloud hardware so I could play with it, Dockerize it, and deploy it to GKE to scale. It turned out to be a pain to install the dependencies correctly. When it finally worked, experimenting was just as painful: I had to transfer the resulting images off the VM before seeing the results, and I turned knobs by changing command-line parameters. Running the instance cost $14 per 24 hours, or about $400 a month. Not ideal.

If you just want to mess around with SD, you can use StabilityAI’s web interface, dreamstudio.ai. After some free credits, you need to buy more. If you have a graphics card with enough VRAM (~10GB+), there are also great open-source web interfaces like the one from Automatic1111. Fortunately, my gaming PC was up to the challenge. The web UIs gave me very quick feedback on prompts and settings, and let me iterate to a point where I was happy enough to use the results in the product.

I have to say that after the idea for the product came to mind, I looked around to see what was already out there. InteriorAI.com had already seen success and was very similar to my idea: you upload an image and choose the room type, style, quality, and inventiveness. I was a bit disappointed when I realized how unoriginal my own thoughts were, but I also saw its creator, @levelsio, tweet about another space he’s in, AI avatars, saying there’s value in meeting people where their photos are: on their phones.

I usually do iOS development anyway, so I thought that was enough of a point of differentiation. I looked in the App Store, and there are a few almost direct rip-offs of InteriorAI.com (even one called InteriorAI!); I’m pretty sure these rip-offs are draining its API. Anyway, I felt I could at least build a better UX than they had, which gave me enough confidence to get started. And I was excited that users could take photos anywhere in their home and see results on the spot.

In addition to being on the phone, I also wanted to differentiate on customization, so I decided to add the ability to choose a color palette in addition to room type and style.

During the holidays I designed and built the app. The app is pretty simple: a single ViewController, an image collection area, a button to choose or take a photo, a few list selections for room type, and a submit button. I had to figure out where to run the AI model. Since running my own server on GCE would cost $400 a month, I looked for alternatives.

Fortunately, there are a handful of APIs that take care of model hosting, operations, load balancing, and autoscaling, and new ones appear every day. When I started, I was choosing between banana.dev, replicate.com, and StabilityAI’s own API. Banana didn’t have an out-of-the-box img2img model, and StabilityAI’s billing was a pain since you had to buy tokens manually. Replicate has simple metered billing (you pay for the seconds the model runs) with autopay, and a ready-to-use StableDiffusion 2.1 img2img configuration.
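
Calling it is a single API request. Here’s roughly what that looks like with Replicate’s Python client; the model version string and input field names below are placeholders you’d copy from the model’s page on replicate.com, not the exact values I use:

```python
import replicate  # pip install replicate; needs REPLICATE_API_TOKEN set

# Placeholder model identifier: copy the real "owner/name:version"
# string from the model page on replicate.com.
output = replicate.run(
    "stability-ai/stable-diffusion-img2img:<version-hash>",
    input={
        "image": open("empty_room.jpg", "rb"),
        "prompt": "a cozy scandinavian living room, interior design photo",
        "prompt_strength": 0.6,  # same knob as img2img strength
        "num_outputs": 1,
    },
)
print(output)  # a list of URLs to the generated images
```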

With the settings I determined earlier, each run on Replicate costs about $0.02 per generation. At $400 a month for the GCE instance, I would have to exceed roughly 20,000 requests per month ($400 ÷ $0.02) before running my own server was worth it. And even then it wouldn’t scale, because each request used most of the GPU’s VRAM, so I would have to serialize all requests.

I wanted to avoid going through App Store review every time I updated my prompt or changed which API I’m using, so I also implemented a small app on AppEngine that takes the user’s customizations, combines them into a prompt, and sends it to Replicate. Surprisingly, I found Apple’s review to be super fast. I even got through it in less than a few hours, so kudos to them! I used StableDiffusion to generate an app icon, compiled some screenshots, and sent everything off to Apple.
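
The backend is tiny. Here’s a sketch of the shape of it as a Flask app (the kind that runs on AppEngine’s standard environment); the endpoint, prompt template, and field names are my own illustrative choices, not the production values:

```python
import replicate
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical template: the prompt is exactly the part I want to iterate
# on server-side, so it can change without an App Store review.
PROMPT_TEMPLATE = "a {style} {room_type}, {palette} color palette, interior design photo"

@app.post("/generate")
def generate():
    body = request.get_json()
    prompt = PROMPT_TEMPLATE.format(
        style=body["style"], room_type=body["room_type"], palette=body["palette"]
    )
    output = replicate.run(
        "stability-ai/stable-diffusion-img2img:<version-hash>",  # placeholder
        input={"image": body["image_url"], "prompt": prompt, "prompt_strength": 0.6},
    )
    return jsonify({"images": list(output)})
```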

I’m proud of what I’ve made. Check it out and let me know what you think! I welcome feedback, suggestions, or complaints. It hasn’t seen much usage yet, but going through the process has been great. Here’s my own scorecard, the good and the bad:

  • Output images look great and are high quality
  • Useful for inspiration or exploring high-level themes for your room
  • Pretty magical for people who haven’t played with generative AI yet
  • I did what I set out to do! And from zero to launch in less than a month!
  • Images don’t necessarily hold up to close scrutiny – a closer look reveals AI artifacts.
  • Color palettes generally don’t have commonly used names, so the palette customization doesn’t work as well as I’d hoped, but it’s useful enough to guide it.
  • It’s hard to balance inventiveness with preserving the structure of the room. I want it to have the freedom to fill empty rooms with furniture and decor, but I don’t want it to invent doors and windows where that’s impossible.

I plan to merge all my lessons into a video course, complete with a skeleton of the app and everything you need to launch a subscription-based, generative-AI-powered iOS app, including the details: getting an LLC, a business Apple Developer account, app icons, screenshots, websites, and more.

If you want to hear more, you can follow me on Twitter or get notified when it’s out.