
Deploy CoreML Models on the Server with Vapor

Drew Althage

9 min read · Nov 7, 2023

Get the benefits of Apple’s ML tools server-side.


SwiftUI client showing image classification results

Recently, at Sovrn, we had an AI Hackathon where we were encouraged to experiment with anything related to machine learning. The Hackathon yielded some fantastic projects from across the company, everything from SQL query generators to chatbots that can answer questions about our products. I thought this would be a great opportunity to learn more about Apple’s ML tools and maybe even build something with real business value.

A few of my colleagues and I teamed up to play with CreateML and CoreML to see if we could integrate some ML functionality into our iOS app. We got a model trained and integrated into our app in several hours, which was pretty amazing. But we quickly realized that we had a few problems to solve before we could actually ship this thing.

  • The model was hefty. It was about 50MB. That’s a lot of space to take up in our app bundle.
  • We wanted to update the model without releasing a new app version.
  • We wanted to use the model in the web browser as well.

We didn’t have time to solve all of these problems. But the other day I was exploring the Vapor web framework and the thought hit me, “Why not deploy CoreML models on the server?”

Apple provides a few pre-trained models, so today we’ll deploy an image classification model on the server behind a REST API with Vapor and create a SwiftUI client to consume it.

Foreword

This prototype is just that, a prototype. It’s not meant to be a production-ready solution. It’s meant to be a proof of concept. There will be warnings in the console, and the code won’t be very clean, but it will work and hopefully get your wheels turning.

If you want to skip all this, or if you do want to follow along, you can find the source code for this project on GitHub.

Okay, disclaimers over. Let’s get started!

Requirements

  • Xcode 15
  • macOS 14
  • Homebrew
  • Apple Developer Account + Physical Device for testing

Getting Started

First, start by creating a new directory to house our Xcode workspace. We’ll call it coreml-web-api.

cd ~/Desktop && mkdir coreml-web-api && cd coreml-web-api

Now let's install Vapor and bootstrap a brand new server. See the docs for more details.

brew install vapor
vapor new server -n
open Package.swift

We want our users to be able to upload images for classification, so add a new route called classify that supports this. In server/Sources/App/routes.swift, clear out all the generated boilerplate and add the following:

import CoreImage
import Vapor

func routes(_ app: Application) throws {
    app.post("classify") { req -> [ClassifierResult] in
        let classificationReq = try req.content.decode(ClassificationRequest.self)
        let imageBuffer = classificationReq.file.data
        guard let fileData = imageBuffer.getData(at: imageBuffer.readerIndex, length: imageBuffer.readableBytes),
              let ciImage = CIImage(data: fileData)
        else {
            throw Errors.badImageData
        }

        let classifier = Classifier() // we'll add this in a sec

        return try classifier.classify(image: ciImage)
    }
}

enum Errors: Error {
    case badImageData // or whatever
}

struct ClassificationRequest: Content {
    var file: File
}

Also, bump up the max file size allowed for uploads in configure.swift:

import Vapor

// configures your application
public func configure(_ app: Application) async throws {
    app.routes.defaultMaxBodySize = "10mb"

    try routes(app)
}

Alright, now let's write up a Classifier API. First, head over to Apple’s ML page to download a pre-trained model of your choosing. In this demo, I’m using the Resnet50 model. We’ll add this to the package in just a moment.

Add a new file called Classifier.swift and drop in the following:

import CoreImage
import CoreML
import Vapor
import Vision

struct Classifier {
    // Nested so it doesn't collide with the Errors enum in routes.swift.
    enum Errors: Error {
        case unableToLoadMLModel
        case noResults
    }

    func classify(image: CIImage) throws -> [ClassifierResult] {
        let url = Bundle.module.url(forResource: "Resnet50", withExtension: "mlmodelc")!
        guard let model = try? VNCoreMLModel(for: Resnet50(contentsOf: url, configuration: MLModelConfiguration()).model) else {
            throw Errors.unableToLoadMLModel
        }

        let request = VNCoreMLRequest(model: model)
        let handler = VNImageRequestHandler(ciImage: image)

        try handler.perform([request])

        guard let results = request.results as? [VNClassificationObservation] else {
            throw Errors.noResults
        }

        return results.map { ClassifierResult(label: $0.identifier, confidence: $0.confidence) }
    }
}

struct ClassifierResult: Encodable, Content {
    var label: String
    var confidence: Float
}

Let’s break this down.

First, we load the model. Adding a CoreML model to a Swift package is not super straightforward. We need to compile the .mlmodel ourselves and add some files to Sources/. We’ll go over that shortly, but this wonkiness explains why loading the model looks slightly different from adding one to a standard Xcode project.

Once the model is loaded, we prepare the request and the request handler; then we do the classification. To send the results as JSON to the client, we need to remap the results to a structure that conforms to Encodable and Content.
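To get a feel for what a client will receive from the classify route, here is a self-contained sketch (plain Foundation, no Vapor) that decodes a sample JSON payload shaped like our ClassifierResult array. The labels and confidences below are made-up examples, not real model output:

```swift
import Foundation

// Client-side mirror of the server's ClassifierResult.
struct ClassifierResult: Codable {
    var label: String
    var confidence: Float
}

// A hypothetical response body from POST /classify.
let json = """
[
  {"label": "tabby, tabby cat", "confidence": 0.92},
  {"label": "tiger cat", "confidence": 0.05}
]
""".data(using: .utf8)!

let results = try JSONDecoder().decode([ClassifierResult].self, from: json)
for result in results {
    print("\(result.label): \(result.confidence)")
}
```

The same Codable struct works on both ends, which is one of the nice perks of having Swift on the server and the client.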

Adding the Model to the Package

This part definitely took me the longest to figure out. Unfortunately, this step is pretty manual; we can’t just drag and drop the model into the project. So, at the root of the server package, add a new folder called MLModelSource and add the Resnet50.mlmodel file there. Create another folder called Resources at server/Sources/App/Resources/.

Now, we need to compile the model, add the Swift class to sources, and include the .mlmodelc in the package bundle. The compilation steps are repetitive so we’ll place them in a Makefile target. In the project root, create a Makefile:

# ~/Desktop/coreml-web-api/
touch Makefile

And add a compile_ml_model target:

compile_ml_model:
	cd server/MLModelSource && \
	xcrun coremlcompiler compile Resnet50.mlmodel ../Sources/App/Resources && \
	xcrun coremlcompiler generate Resnet50.mlmodel ../Sources/App/Resources --language Swift

Next, add this to the executable target in the Package.swift file:

resources: [
    .copy("Resources/Resnet50.mlmodelc"),
]

The target should look like this:

.executableTarget(
    name: "App",
    dependencies: [
        .product(name: "Vapor", package: "vapor"),
    ],
    resources: [
        .copy("Resources/Resnet50.mlmodelc"),
    ]
),

Okay, now from the project root, run the compile_ml_model target:

make compile_ml_model

Awesome!!! Now we have an amazing server that supports classifying uploaded images using the Resnet50 model. Before we move on to creating the client, we need to adjust the App scheme to make the server available to a physical device on your network.

Open up the scheme editor, and add serve --hostname 0.0.0.0 to the run arguments.
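With the server running, you can sanity-check the route from a terminal before writing any client code. This assumes Vapor’s default port 8080 and a local JPEG named photo.jpg (swap in any image you have handy):

```shell
# Upload an image as multipart form data; the form field name
# must match ClassificationRequest's `file` property.
curl -F "file=@photo.jpg" http://localhost:8080/classify
```

If everything is wired up, you should get back a JSON array of labels and confidence scores.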


Sweet. Now, we’ll create a client to do the uploading.

iOS Client

OK, in Xcode go to File -> New -> Project and add an iOS app to the workspace. We only need SwiftUI, no tests or SwiftData. I’m giving mine a really clever name of CoreMLWebClient … poetic.

Great. Now, let's do a little config work. Since we’re going to be using the camera, we need to update the Info.plist with the Privacy - Camera Usage Description key.
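If you prefer editing the Info.plist source directly, the raw key behind that display name is NSCameraUsageDescription. The description string below is just an example; write whatever fits your app:

```xml
<key>NSCameraUsageDescription</key>
<string>Used to capture photos for image classification.</string>
```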


[...]

