With Watson

Use the cognitive capabilities of Watson in your iOS app with Core ML.

This iOS application uses the Watson Visual Recognition service to analyze photos taken on an iPhone or selected from the photo album, using a model you train yourself on the Watson Studio platform (also known as the Data Platform). It demonstrates how to integrate a Watson API into a Swift project using the Watson Swift SDK.

Download the available datasets from Dropbox, or create your own set of images.

If you want to read this content in Brazilian Portuguese, click here.

Components and technologies

  • Visual Recognition: Analyzes images for scenes, objects, faces, and other content.
  • Watson Studio: Data platform with a collaborative environment and tools for data scientists, developers, and SMEs.
  • Cloud Object Storage: Cloud-based object storage service.
  • Swift: Open-source programming language for building apps on Apple platforms.

How to configure and run locally

To build and run the application, you need the latest versions of Xcode and CocoaPods (a dependency manager for Swift and Objective-C projects) installed.
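If CocoaPods is not installed yet, it can usually be installed through RubyGems (a common route; your machine may use Homebrew or a Ruby version manager instead):

sudo gem install cocoapods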

1. Get the app

git clone https://github.com/victorshinya/with-watson.git
cd with-watson/

2. Download all app dependencies

pod install
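pod install resolves the dependencies declared in the project's Podfile. As a rough sketch, the Watson entry likely resembles the following (the exact pod name and target depend on the Watson Swift SDK release the project pins; treat this as an assumption, not the actual file):

target 'With Watson' do
  use_frameworks!
  # Watson Visual Recognition client from the Watson Swift SDK (pod name assumed)
  pod 'IBMWatsonVisualRecognitionV3'
end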

3. Open the "With Watson.xcworkspace" file

CocoaPods generates a workspace that links the project to its pods, so open the .xcworkspace file instead of the .xcodeproj.

open With\ Watson.xcworkspace

4. Open the Constants.swift file

Open the file and fill in your Watson Visual Recognition credentials and the ID of the Visual Recognition model you trained in Watson Studio.
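As a minimal sketch of what such a file might contain (the property names here are illustrative assumptions; match them to the actual Constants.swift in the project):

// Constants.swift - illustrative sketch; names are assumptions
struct Constants {
    // API key for your Watson Visual Recognition service instance
    static let apiKey = "YOUR_API_KEY"
    // ID of the custom model trained in Watson Studio
    static let classifierId = "YOUR_CLASSIFIER_ID"
    // Version date expected by the Visual Recognition API
    static let version = "2018-03-19"
}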

5. Run with CMD + R (or press the Play button)

Now run the project in the Xcode simulator or on your own iPhone / iPod / iPad. Keep in mind that the camera cannot be opened inside the simulator.
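Because the simulator has no camera, the image picker should fall back to the photo album there. A minimal Swift sketch of that check with UIKit's UIImagePickerController (the helper function is hypothetical, not taken from the project):

import UIKit

// Hypothetical helper: prefer the camera on a real device and
// fall back to the photo library, e.g. when running in the simulator.
func makeImagePicker() -> UIImagePickerController {
    let picker = UIImagePickerController()
    if UIImagePickerController.isSourceTypeAvailable(.camera) {
        picker.sourceType = .camera
    } else {
        picker.sourceType = .photoLibrary
    }
    return picker
}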

License

MIT License

Copyright (c) 2018 Victor Kazuyuki Shinya