In this tutorial, we will demonstrate how easy it is to create an AI assistant for iOS using Stream Chat. We will use the Anthropic and OpenAI APIs as our example LLM providers; however, developers are free to use whichever LLM provider they like and still benefit from Stream’s rich UI support for Markdown, tables, code samples, etc. To follow along with this tutorial, we recommend creating a free account and checking out our main iOS chat SDK tutorial as a refresher.
Talk is cheap, so here’s a video of the result:
We will use our new UI components for AI to render messages as they come, with animations similar to those of popular LLMs, such as ChatGPT. Our UI components can render LLM responses that contain markdown, code, tables, and much more.
We also provide UI for thinking indicators that can react to the new AI-related events we have on the server side.
The entire code can also be found here.
1. Project Setup
To follow along, we need at least version 4.68.0 of the Stream Chat SwiftUI SDK and 0.3.0 of the Stream Chat AI SDK. These SDKs contain UI components that will help facilitate the integration of AI into our chat feature. Our AI UI components support iOS 15 and above.
First, let’s create and set up the iOS project. Go to Xcode → File → New → Project, and name the project `StreamChatAIAssistant` (or any other name you prefer).
Next, we add the required dependencies from StreamChat and the UI components.
We use the following steps to add the SDK via Swift Package Manager:
- Select "Add Packages…" in the File menu
- Paste the following URL: https://github.com/GetStream/stream-chat-swiftui
- In the option "Dependency Rule" choose "Up to Next Major Version", and in the text input fields next to it, enter "4.68.0" and "5.0.0" accordingly.
- Choose "Add Package" and wait for the dialog to complete
- Only select "StreamChatSwiftUI" and select "Add Package" again
Next, we should add the AI components. To do that, perform the same process with the following package URL: https://github.com/GetStream/stream-chat-swift-ai.
In the option "Dependency Rule" choose "Up to Next Major Version", and in the text input fields next to it, enter "0.3.0" and "1.0.0" accordingly.
With that, we have our iOS project ready and can add some code.
2. Setting Up the Channel List
Next, let’s present the Stream Chat channel list component. When a channel is tapped, we will open the chat view with the message list. To do this, we add the following code in our `StreamChatAIAssistantApp` file:
```swift
import SwiftUI
import StreamChat
import StreamChatSwiftUI

@main
struct StreamChatAIAssistantApp: App {
    @State var streamChat: StreamChat
    @StateObject var channelListViewModel: ChatChannelListViewModel

    var chatClient: ChatClient = {
        var config = ChatClientConfig(apiKey: .init("zcgvnykxsfm8"))
        config.isLocalStorageEnabled = true
        config.applicationGroupIdentifier = "group.io.getstream.iOS.ChatDemoAppSwiftUI"
        let client = ChatClient(config: config)
        return client
    }()

    init() {
        let utils = Utils(
            messageListConfig: .init(messageDisplayOptions: .init(spacerWidth: { _ in
                return 60
            }))
        )
        _streamChat = State(initialValue: StreamChat(chatClient: chatClient, utils: utils))
        _channelListViewModel = StateObject(wrappedValue: ViewModelsFactory.makeChannelListViewModel())
        chatClient.connectUser(
            userInfo: UserInfo(
                id: "anakin_skywalker",
                imageURL: URL(string: "https://vignette.wikia.nocookie.net/starwars/images/6/6f/Anakin_Skywalker_RotS.png")
            ),
            token: try! Token(rawValue: "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VyX2lkIjoiYW5ha2luX3NreXdhbGtlciJ9.ZwCV1qPrSAsie7-0n61JQrSEDbp6fcMgVh4V2CB0kM8")
        )
    }

    var body: some Scene {
        WindowGroup {
            ChatChannelListView(
                viewModel: channelListViewModel
            )
        }
    }
}
```
When we run the app at this point, we will see the channel list. When we tap on an item, we will be navigated to a channel view to see all the messages.
The code above creates the chat client and the `StreamChat` object, and connects a hardcoded user. For production apps, you would want a proper setup where the user is provided after a login process. You can learn more about the client setup in our docs.
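For reference, a production setup would typically fetch the token from your own backend after login. Here is a minimal sketch using the SDK’s token-provider variant of `connectUser`; the `fetchChatToken` helper and `loggedInUserId` value are hypothetical.

```swift
// A sketch: connect with a token fetched from your own backend after login.
// `fetchChatToken` is a hypothetical helper that calls your auth endpoint
// and returns a Result<Token, Error>.
chatClient.connectUser(
    userInfo: UserInfo(id: loggedInUserId),
    tokenProvider: { completion in
        fetchChatToken(for: loggedInUserId) { result in
            completion(result)
        }
    }
)
```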
3. Running the Backend
Before adding AI features to our iOS app, let’s set up our Node.js backend. The backend exposes two endpoints for starting and stopping an AI agent for a particular channel. While the agent is running, it listens to all new messages and forwards them to the selected LLM provider. It delivers the results by sending a message and continuously updating its text.
We use the Anthropic API and the new Assistants API from OpenAI in this sample. We also have an example of function calling. By default, Anthropic is selected, but we can pass `openai` as a `platform` parameter in the `start-ai-agent` request if we want to use OpenAI.
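For illustration, a start request that explicitly selects OpenAI could be encoded like this. This is a sketch: the `platform` field name comes from the description above, and the exact JSON shape should be verified against the server code.

```swift
import Foundation

// A sketch of a start-ai-agent request body that selects OpenAI instead of
// the default Anthropic. Verify the field names against the server code.
struct StartAIAgentRequest: Encodable {
    let channelId: String
    let platform: String // "anthropic" (default) or "openai"

    enum CodingKeys: String, CodingKey {
        case channelId = "channel_id"
        case platform
    }
}

// Encodes to: {"channel_id":"general","platform":"openai"}
let body = StartAIAgentRequest(channelId: "general", platform: "openai")
let data = try JSONEncoder().encode(body) // sent as the POST body
```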
The sample also supports sending different states of the typing indicator (for example, `Thinking`, `Checking external sources`, etc.).
To run the server locally, we need to clone it:
```bash
git clone https://github.com/GetStream/ai-assistant-nodejs.git your_local_location
```
Next, we need to set up our `.env` file with the following keys:
```
ANTHROPIC_API_KEY=insert_your_key
STREAM_API_KEY=insert_your_key
STREAM_API_SECRET=insert_your_secret
OPENAI_API_KEY=insert_your_key
OPENWEATHER_API_KEY=insert_your_key
```
The `STREAM_API_KEY` and `STREAM_API_SECRET` can be found in our app’s dashboard. To get an `ANTHROPIC_API_KEY`, we can create an account at Anthropic. Alternatively, we can get an `OPENAI_API_KEY` from OpenAI.
The example also uses function calling from OpenAI, which allows us to call a function when a specific query is recognized. In this sample, we can ask, “What’s the weather like?” in a particular location. If you want to support this feature, you can get your API key from OpenWeather (or any other service, in which case we would need to update the request accordingly).
Next, we need to install the dependencies using the `npm install` command.
After the setup is done, we can run the sample from the root with the following command:
```bash
npm start
```
This will start listening to requests on `localhost:3000`.
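Note that `localhost` is reachable from the iOS simulator; to test on a physical device, you would need to replace the base URL with your machine’s local network address, and you may also need an App Transport Security exception for plain HTTP.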
4. Use a Service for Backend Interaction
Let’s return to the iOS app and write the code necessary to interact with the server we set up in the previous section.
To do this, create a new file called `StreamAIChatService` and add the following code:
```swift
import Foundation

class StreamAIChatService {
    static let shared = StreamAIChatService()

    private let baseURL = "http://localhost:3000"
    private let jsonEncoder = JSONEncoder()
    private let urlSession = URLSession.shared

    func setupAgent(channelId: String) async throws {
        try await executePostRequest(
            body: AIAgentRequest(channelId: channelId),
            endpoint: "start-ai-agent"
        )
    }

    func stopAgent(channelId: String) async throws {
        try await executePostRequest(
            body: AIAgentRequest(channelId: channelId),
            endpoint: "stop-ai-agent"
        )
    }

    private func executePostRequest<RequestBody: Encodable>(body: RequestBody, endpoint: String) async throws {
        let url = URL(string: "\(baseURL)/\(endpoint)")!
        var request = URLRequest(url: url)
        request.httpMethod = "POST"
        request.setValue("application/json", forHTTPHeaderField: "Content-Type")
        request.httpBody = try jsonEncoder.encode(body)
        _ = try await urlSession.data(for: request)
    }
}

struct AIAgentRequest: Encodable {
    let channelId: String

    enum CodingKeys: String, CodingKey {
        case channelId = "channel_id"
    }
}
```
This service exposes methods that start and stop the AI agent for a given channel identifier.
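Before we wire it into the UI in the next sections, here is a quick sketch of how the service is called; the channel id is a placeholder.

```swift
// A usage sketch: start the agent for a channel, and stop it later.
// "general" is a placeholder channel id.
Task {
    do {
        try await StreamAIChatService.shared.setupAgent(channelId: "general")
        // ... later, when the user leaves the channel:
        try await StreamAIChatService.shared.stopAgent(channelId: "general")
    } catch {
        print("AI agent request failed: \(error)")
    }
}
```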
5. Creating a Typing Indicator
We have covered the interaction with the backend; now let’s switch our focus to building the UI, starting with the AI typing indicator. To do this, we create a new file called `TypingIndicatorHandler` and add the following code:
```swift
import Foundation
import StreamChat
import StreamChatSwiftUI

class TypingIndicatorHandler: ObservableObject, EventsControllerDelegate, ChatChannelWatcherListControllerDelegate {
    @Injected(\.chatClient) var chatClient: ChatClient

    private var eventsController: EventsController!

    @Published var state: String = ""

    private let aiBotId = "ai-bot"

    @Published var aiBotPresent = false
    @Published var generatingMessageId: String?

    var channelId: ChannelId? {
        didSet {
            if let channelId = channelId {
                watcherListController = chatClient.watcherListController(query: .init(cid: channelId))
                watcherListController?.delegate = self
                watcherListController?.synchronize { [weak self] _ in
                    guard let self else { return }
                    self.aiBotPresent = self.isAiBotPresent
                }
            }
        }
    }

    @Published var typingIndicatorShown = false

    var isAiBotPresent: Bool {
        let aiAgent = watcherListController?
            .watchers
            .first(where: { $0.id.contains(self.aiBotId) })
        return aiAgent?.isOnline == true
    }

    var watcherListController: ChatChannelWatcherListController?

    init() {
        eventsController = chatClient.eventsController()
        eventsController.delegate = self
    }

    func eventsController(_ controller: EventsController, didReceiveEvent event: any Event) {
        if event is AIIndicatorClearEvent {
            typingIndicatorShown = false
            generatingMessageId = nil
            return
        }
        guard let typingEvent = event as? AIIndicatorUpdateEvent else { return }
        state = typingEvent.title
        if typingEvent.state == .generating {
            generatingMessageId = typingEvent.messageId
        } else {
            generatingMessageId = nil
        }
        typingIndicatorShown = !typingEvent.title.isEmpty
    }

    func channelWatcherListController(
        _ controller: ChatChannelWatcherListController,
        didChangeWatchers changes: [ListChange<ChatUser>]
    ) {
        self.aiBotPresent = isAiBotPresent
    }
}

extension AIIndicatorUpdateEvent {
    var title: String {
        switch state {
        case .thinking:
            return "Thinking"
        case .checkingExternalSources:
            return "Checking external sources"
        default:
            return ""
        }
    }
}
```
This handler reacts to the AI-related events sent from the Node.js server (`AIIndicatorUpdateEvent` and `AIIndicatorClearEvent`) and, based on them, determines whether the typing indicator should be shown and whether a message is currently being generated. It also observes the channel’s watcher list to detect whether the AI bot is present.
6. Add UI for Handling the AI
With the typing indicator handler in place, let’s extend Stream Chat’s SwiftUI SDK to include these AI capabilities. To do this, we create a view factory to customize the chat views. Our docs provide more details about this approach.
We create a new file called `AIViewFactory` and add the following code:
```swift
import SwiftUI
import StreamChat
import StreamChatAI
import StreamChatSwiftUI

class AIViewFactory: ViewFactory {
    @Injected(\.chatClient) var chatClient: ChatClient

    let typingIndicatorHandler: TypingIndicatorHandler

    init(typingIndicatorHandler: TypingIndicatorHandler) {
        self.typingIndicatorHandler = typingIndicatorHandler
    }

    func makeMessageListContainerModifier() -> some ViewModifier {
        CustomMessageListContainerModifier(typingIndicatorHandler: typingIndicatorHandler)
    }

    func makeEmptyMessagesView(
        for channel: ChatChannel,
        colors: ColorPalette
    ) -> some View {
        AIAgentOverlayView(typingIndicatorHandler: typingIndicatorHandler)
    }

    @ViewBuilder
    func makeCustomAttachmentViewType(
        for message: ChatMessage,
        isFirst: Bool,
        availableWidth: CGFloat,
        scrolledId: Binding<String?>
    ) -> some View {
        StreamingAIView(
            typingIndicatorHandler: typingIndicatorHandler,
            message: message,
            isFirst: isFirst
        )
    }

    func makeTrailingComposerView(
        enabled: Bool,
        cooldownDuration: Int,
        onTap: @escaping () -> Void
    ) -> some View {
        CustomTrailingComposerView(
            typingIndicatorHandler: typingIndicatorHandler,
            onTap: onTap
        )
    }
}
```
Let’s also implement the views referenced in the factory. First, we want to show a button in the top-right corner that starts and stops the AI agent. To do this, we add the following code (it can go in the same file):
```swift
struct CustomMessageListContainerModifier: ViewModifier {
    @ObservedObject var typingIndicatorHandler: TypingIndicatorHandler

    func body(content: Content) -> some View {
        content.overlay {
            AIAgentOverlayView(typingIndicatorHandler: typingIndicatorHandler)
        }
    }
}

struct AIAgentOverlayView: View {
    @ObservedObject var typingIndicatorHandler: TypingIndicatorHandler

    var body: some View {
        VStack {
            HStack {
                Spacer()
                if !typingIndicatorHandler.aiBotPresent {
                    Button {
                        Task {
                            if let channelId = typingIndicatorHandler.channelId {
                                try await StreamAIChatService.shared.setupAgent(channelId: channelId.id)
                            }
                        }
                    } label: {
                        AIIndicatorButton(title: "Start AI")
                    }
                } else {
                    Button {
                        Task {
                            if let channelId = typingIndicatorHandler.channelId {
                                try await StreamAIChatService.shared.stopAgent(channelId: channelId.id)
                            }
                        }
                    } label: {
                        AIIndicatorButton(title: "Stop AI")
                    }
                }
            }
            Spacer()
            if typingIndicatorHandler.typingIndicatorShown {
                HStack {
                    AITypingIndicatorView(text: typingIndicatorHandler.state)
                    Spacer()
                }
                .padding()
                .frame(height: 80)
                .background(Color(UIColor.secondarySystemBackground))
            }
        }
    }
}

struct AIIndicatorButton: View {
    let title: String

    var body: some View {
        HStack {
            Text(title)
                .bold()
            Image(systemName: "wand.and.stars.inverse")
        }
        .padding(.all, 8)
        .padding(.horizontal, 4)
        .background(Color(UIColor.secondarySystemBackground))
        .cornerRadius(16)
        .shadow(color: Color.black.opacity(0.1), radius: 10, x: 0, y: 12)
        .shadow(color: Color.black.opacity(0.1), radius: 1, x: 0, y: 1)
        .padding()
    }
}
```
Next, we add the view that renders the text message.
```swift
struct StreamingAIView: View {
    @ObservedObject var typingIndicatorHandler: TypingIndicatorHandler
    var message: ChatMessage
    var isFirst: Bool

    var body: some View {
        StreamingMessageView(
            content: message.text,
            isGenerating: typingIndicatorHandler.generatingMessageId == message.id
        )
        .padding()
        .messageBubble(for: message, isFirst: isFirst)
    }
}
```
This view listens to the changes in the typing indicator and uses the `StreamingMessageView` from our SDK.
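To get a feel for the component in isolation, here is a minimal sketch that uses only the two parameters shown above; the markdown content is arbitrary.

```swift
// A minimal standalone sketch of StreamingMessageView, using only the
// parameters from the example above. The content string is arbitrary.
StreamingMessageView(
    content: "## Result\nHere is some **markdown** with `inline code`.",
    isGenerating: true
)
```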
Finally, we want to support stopping the generation of a message. To do this, we customize our composer view with an additional button that is shown while generation is in progress:
```swift
struct CustomTrailingComposerView: View {
    @Injected(\.utils) private var utils

    @EnvironmentObject var viewModel: MessageComposerViewModel
    var onTap: () -> Void
    @ObservedObject var typingIndicatorHandler: TypingIndicatorHandler

    init(
        typingIndicatorHandler: TypingIndicatorHandler,
        onTap: @escaping () -> Void
    ) {
        self.typingIndicatorHandler = typingIndicatorHandler
        self.onTap = onTap
    }

    public var body: some View {
        Group {
            if typingIndicatorHandler.generatingMessageId != nil {
                Button {
                    Task {
                        viewModel.channelController
                            .eventsController()
                            .sendEvent(
                                AIIndicatorStopEvent(cid: viewModel.channelController.channel?.cid)
                            )
                    }
                } label: {
                    Image(systemName: "stop.circle.fill")
                }
            } else {
                SendMessageButton(
                    enabled: viewModel.sendButtonEnabled,
                    onTap: onTap
                )
            }
        }
        .padding(.bottom, 8)
    }
}
```
7. Connect UI Components with the SDK
Now, let’s connect everything. We go back to the `StreamChatAIAssistantApp` file and make the following changes. First, we define a new `@State` variable of type `TypingIndicatorHandler`:
```swift
struct StreamChatAIAssistantApp: App {
    // existing code
    @State var typingIndicatorHandler: TypingIndicatorHandler
    // existing code
}
```
We update the `init` method so it initializes the `typingIndicatorHandler` and sets up a custom message resolver:
```swift
init() {
    let utils = Utils(
        messageTypeResolver: CustomMessageResolver(),
        messageListConfig: .init(
            messageDisplayOptions: .init(spacerWidth: { _ in
                return 60
            }),
            skipEditedMessageLabel: { message in
                message.extraData["ai_generated"]?.boolValue == true
            }
        )
    )
    _streamChat = State(initialValue: StreamChat(chatClient: chatClient, utils: utils))
    typingIndicatorHandler = TypingIndicatorHandler()
    _channelListViewModel = StateObject(wrappedValue: ViewModelsFactory.makeChannelListViewModel())
    chatClient.connectUser(
        userInfo: UserInfo(
            id: "anakin_skywalker",
            imageURL: URL(string: "https://vignette.wikia.nocookie.net/starwars/images/6/6f/Anakin_Skywalker_RotS.png")
        ),
        token: try! Token(rawValue: "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VyX2lkIjoiYW5ha2luX3NreXdhbGtlciJ9.ZwCV1qPrSAsie7-0n61JQrSEDbp6fcMgVh4V2CB0kM8")
    )
}
```
We also need to update the `body` to use the newly created view factory and react to channel selection events:
```swift
var body: some Scene {
    WindowGroup {
        ChatChannelListView(
            viewFactory: AIViewFactory(typingIndicatorHandler: typingIndicatorHandler),
            viewModel: channelListViewModel
        )
        .onChange(of: channelListViewModel.selectedChannel) { oldValue, newValue in
            typingIndicatorHandler.channelId = newValue?.channel.cid
            if newValue == nil, let channelId = oldValue?.channel.cid.id {
                Task {
                    try await StreamAIChatService.shared.stopAgent(channelId: channelId)
                }
            }
        }
    }
}
```
Lastly, we need the `CustomMessageResolver` referenced above, which treats messages with the `ai_generated` flag as custom attachments:
```swift
class CustomMessageResolver: MessageTypeResolving {
    func hasCustomAttachment(message: ChatMessage) -> Bool {
        // Same extraData check as in skipEditedMessageLabel above.
        message.extraData["ai_generated"]?.boolValue == true
    }
}
```
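Returning `true` here routes AI-generated messages through the `makeCustomAttachmentViewType` method of our `AIViewFactory`, which is how they end up rendered by the `StreamingAIView` we defined earlier.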
Finally, we are ready to run the app! If we open a channel and start the AI agent, we can start asking some questions.
Conclusion
In this tutorial, we have built an AI assistant bot that works seamlessly with Stream Chat’s iOS SDK:
- We have shown how to use our AI components to render LLM responses containing markdown, code, tables, and more.
- We have shown how to set up a server that starts and stops AI agents responding to user questions.
- You have learned how to customize our SwiftUI SDK to integrate these new AI features.
If you want to learn more about our AI capabilities, head to our AI landing page. Additionally, check our iOS docs to learn how you can provide more customizations to your chat apps.