This tutorial will demonstrate how easy it is to build an AI assistant with the Stream React Native Chat SDK. While this tutorial features the Anthropic and OpenAI APIs as LLM providers, you can integrate any LLM service with Stream and still benefit from the same features, such as generation indicators, markdown support, tables, etc. No matter the scale of your project, Stream offers a free Maker Plan, so let’s dive in.
Talk is cheap, so here’s a video of the result:
We will use our new UI components for AI to render messages as they come, with animations similar to those of popular LLMs, such as ChatGPT. Our UI components can render LLM responses that contain markdown, code, tables, and much more.
We also provide UI for thinking indicators that can react to the new AI-related events on the server side.
The entire code can also be found here.
1. Project Setup
We must ensure we are using at least version 5.44.1 of the Stream Chat React Native SDK. This version contains UI components that help facilitate the integration of AI into our chat feature. Note that this version of the SDK does not yet support the React Native New Architecture; that support arrives in v6 and onwards, for which you can find the latest release candidate here.
As a final note, this tutorial will be done with React Native Community CLI - however, feel free to replicate it with Expo as well, and the results will be the same.
We will now create and set up a new React Native project using our SDK. First, let’s initialize a new React Native project:
```bash
npx @react-native-community/cli@latest init Test --version 0.75.4
```
You can also install the latest React Native version, but if you're using version 5.44.1 of our Chat SDK, make sure you run it with the new architecture disabled.
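In case the New Architecture was enabled previously, it can be turned off in `android/gradle.properties` on Android and at pod-install time on iOS. A quick sketch (React Native 0.75 ships with it disabled by default, so this is only needed if it was switched on):

```bash
# android/gradle.properties — keep the New Architecture off
newArchEnabled=false

# iOS — install pods with the New Architecture disabled
cd ios && RCT_NEW_ARCH_ENABLED=0 pod install
```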
Next, let’s install our SDK:
```bash
yarn add stream-chat-react-native
```
As well as its required dependencies:
```bash
yarn add @react-native-community/netinfo @stream-io/flat-list-mvcp react-native-fs react-native-gesture-handler react-native-image-resizer react-native-reanimated react-native-svg
```
For dependency-specific setup, you may refer to the Application Level Setup section of our Stream Chat React Native tutorial.
Next, since we’ll need a navigation library, you can install React Navigation as described in their docs.
Finally, to install everything you may run:
```bash
yarn install
npx pod-install
```
at the root of your new project.
After all this, we can run the app by running `yarn start --reset-cache` to start Metro, then `yarn run ios` for iOS or `yarn run android` for Android, and we should see the React Native welcome screen.
2. Setting Up the Channel List
Next, let’s present the Stream Chat channel list component. When a channel is tapped, we will open the chat view with the message list. We will also set up a basic navigation stack to make things smoother. To do this, we add the following code in our `App.tsx` file:
```tsx
import React from 'react';
import { NavigationContainer } from '@react-navigation/native';
import { createStackNavigator, StackNavigationProp } from '@react-navigation/stack';
import { Text, SafeAreaView } from 'react-native';
import { useChatClient } from './hooks/useChatClient.ts';
import { AppProvider, useAppContext } from './contexts/AppContext.tsx';
import {
  Chat,
  OverlayProvider,
  ChannelList,
  Channel,
  MessageList,
  MessageInput,
} from 'stream-chat-react-native';
import { StreamChat, ChannelSort } from 'stream-chat';
import { chatUserId, chatApiKey } from './chatConfig';
import { GestureHandlerRootView } from 'react-native-gesture-handler';

const chatInstance = StreamChat.getInstance(chatApiKey);

const filters = {
  members: {
    $in: [chatUserId],
  },
};

const sort: ChannelSort = { last_updated: -1 };
const chatTheme = {};

type ChannelRoute = { ChannelScreen: undefined };
type ChannelListRoute = { ChannelListScreen: undefined };
type NavigationParamsList = ChannelRoute & ChannelListRoute;

const Stack = createStackNavigator<NavigationParamsList>();

const ChannelListScreen: React.FC<{
  navigation: StackNavigationProp<NavigationParamsList, 'ChannelListScreen'>;
}> = (props) => {
  const { setChannel } = useAppContext();
  return (
    <ChannelList
      filters={filters}
      sort={sort}
      onSelect={(channel) => {
        const { navigation } = props;
        setChannel(channel);
        navigation.navigate('ChannelScreen');
      }}
    />
  );
};

const ChannelScreen: React.FC<{
  navigation: StackNavigationProp<NavigationParamsList, 'ChannelScreen'>;
}> = () => {
  const { channel } = useAppContext();

  if (!channel) {
    return null;
  }

  return (
    <Channel channel={channel}>
      <MessageList />
      <MessageInput />
    </Channel>
  );
};

const NavigationStack = () => {
  const { clientIsReady } = useChatClient();

  if (!clientIsReady) {
    return <Text>Loading the chats ...</Text>;
  }

  return (
    <Stack.Navigator>
      <Stack.Screen name='ChannelListScreen' component={ChannelListScreen} />
      <Stack.Screen name='ChannelScreen' component={ChannelScreen} />
    </Stack.Navigator>
  );
};

export default () => {
  return (
    <SafeAreaView style={{ flex: 1, backgroundColor: 'white' }}>
      <AppProvider>
        <GestureHandlerRootView style={{ flex: 1 }}>
          <OverlayProvider value={{ style: chatTheme }}>
            <Chat client={chatInstance}>
              <NavigationContainer>
                <NavigationStack />
              </NavigationContainer>
            </Chat>
          </OverlayProvider>
        </GestureHandlerRootView>
      </AppProvider>
    </SafeAreaView>
  );
};
```
Additionally, we can create three more files to make things a bit clearer. Those would be:
useChatClient.ts
```ts
import { useEffect, useState } from 'react';
import { StreamChat } from 'stream-chat';
import { chatApiKey, chatUserId, chatUserName, chatUserToken } from '../chatConfig';

const user = {
  id: chatUserId,
  name: chatUserName,
};

const chatClient = StreamChat.getInstance(chatApiKey);

export const useChatClient = () => {
  const [clientIsReady, setClientIsReady] = useState(false);

  useEffect(() => {
    const setupClient = async () => {
      try {
        if (!chatClient.userID) {
          // Await the connection so errors are caught and readiness is accurate.
          await chatClient.connectUser(user, chatUserToken);
        }
        setClientIsReady(true);
      } catch (error) {
        if (error instanceof Error) {
          console.error(`An error occurred while connecting the user: ${error.message}`);
        }
      }
    };

    setupClient();
  }, []);

  return {
    clientIsReady,
  };
};
```
AppContext.tsx
```tsx
import React, { ReactNode, useState } from 'react';
import type { Channel } from 'stream-chat';

export type AppContextValue = {
  channel: Channel | undefined;
  setChannel: (channel: Channel) => void;
};

export const AppContext = React.createContext<AppContextValue>({
  setChannel: () => {},
  channel: undefined,
});

export const AppProvider = ({ children }: { children: ReactNode }) => {
  const [channel, setChannel] = useState<Channel>();

  const contextValue = { channel, setChannel };

  return <AppContext.Provider value={contextValue}>{children}</AppContext.Provider>;
};

export const useAppContext = () => React.useContext(AppContext);
```
chatConfig.ts
```ts
export const chatApiKey = 'zcgvnykxsfm8';
export const chatUserId = 'rn-ai-test-user';
export const chatUserToken = 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VyX2lkIjoicm4tYWktdGVzdC11c2VyIn0.5pc_6W73-BxPxDbnnK3fx-F8WUb6lXxBOiq7IiIbaS4';
export const chatUserName = 'Bob';
```
When we run the app at this point, we will see the channel list. When we tap on an item, we will be navigated to a channel view to see all the messages. For the configuration given above, we’ve prepared some channels for testing purposes; however, you’re highly encouraged to create your own application within the Stream Dashboard and try it out with your credentials.
3. Running the Backend
Before adding AI features to our React Native app, let’s set up our Node.js backend. The backend will expose two methods for starting and stopping an AI agent for a particular channel. If the agent is started, it listens to all new messages and sends them to the selected LLM provider. It provides the results by sending a message and updating its text.
We use the Anthropic API and the new Assistants API from OpenAI in this sample. We also have an example of function calling. By default, Anthropic is selected, but we can pass `openai` as a `platform` parameter in the `start-ai-agent` request if we want to use OpenAI.

The sample also supports sending different states of the typing indicator (for example, `Thinking`, `Checking external sources`, etc.).
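Under the hood, these indicator states travel as Stream events. As a rough server-side illustration (Node.js with the `stream-chat` client; the event name and state value below follow Stream's AI event conventions but should be verified against the sample repository):

```ts
// Sketch: tell everyone watching the channel that the agent is thinking.
// 'ai_indicator.update' and 'AI_STATE_THINKING' are assumptions based on
// Stream's AI events; check ai-assistant-nodejs for the exact values used.
await channel.sendEvent({
  type: 'ai_indicator.update',
  ai_state: 'AI_STATE_THINKING',
  message_id: message.id,
});
```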
To run the server locally, we need to clone it:
```bash
git clone https://github.com/GetStream/ai-assistant-nodejs.git your_local_location
```
Next, we need to set up our `.env` file with the following keys:
```
ANTHROPIC_API_KEY=insert_your_key
STREAM_API_KEY=insert_your_key
STREAM_API_SECRET=insert_your_secret
OPENAI_API_KEY=insert_your_key
OPENWEATHER_API_KEY=insert_your_key
```
The `STREAM_API_KEY` and `STREAM_API_SECRET` can be found in our app's dashboard. To get an `ANTHROPIC_API_KEY`, we can create an account at Anthropic. Alternatively, we can get an `OPENAI_API_KEY` from OpenAI.
The example also uses function calling from OpenAI, which allows us to call a function if a specific query is recognized. In this sample, we can ask, “What’s the weather like?” in a particular location. If you want to support this feature, you can get your API key from OpenWeather (or any other service, but we would need to update the request in that case).
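To give a sense of what the function definition involves, here is a minimal sketch of an OpenAI tool declaration for such a weather lookup. The name, description, and parameters are illustrative; the sample repository's actual definition may differ:

```ts
// Hypothetical OpenAI function-calling tool for a weather query; the real
// definition lives in the ai-assistant-nodejs repo and may differ.
const weatherTool = {
  type: 'function' as const,
  function: {
    name: 'get_current_weather',
    description: 'Gets the current weather for a given location',
    parameters: {
      type: 'object',
      properties: {
        location: { type: 'string', description: 'City name, e.g. "Amsterdam"' },
      },
      required: ['location'],
    },
  },
};
```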
Next, we need to install the dependencies using the `npm install` command.
After the setup is done, we can run the sample from the root with the following command:
```bash
npm start
```
This will start listening to requests on `localhost:3000`.
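As a quick sanity check, we can hit the endpoint directly from the terminal (the channel ID below is a placeholder; use one of your own):

```bash
# Starts an agent for the given channel. The optional "platform" field
# switches to OpenAI; omit it to use the Anthropic default.
curl -X POST http://localhost:3000/start-ai-agent \
  -H 'Content-Type: application/json' \
  -d '{"channel_id": "my-channel-id", "platform": "openai"}'
```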
4. Backend Interaction APIs
Now, let’s get back to our app and write some utility code to help us interact with the newly spun-up backend.
To do this, we’ll introduce two more files:
http/api.ts
```ts
export const post = async (url: string, data: unknown = {}) => {
  try {
    const response = await fetch(url, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(data),
    });
    if (!response.ok) {
      throw new Error(`An HTTP Error has occurred. Status: ${response.status}`);
    }

    return await response.json();
  } catch (error) {
    console.error('Error:', error);
  }
};
```
http/requests.ts
```ts
import { post } from './api.ts';

export const startAI = async (channelId: string) =>
  post('http://localhost:3000/start-ai-agent', { channel_id: channelId });

export const stopAI = async (channelId: string) =>
  post('http://localhost:3000/stop-ai-agent', { channel_id: channelId });
```
The code above should make initiating HTTP requests to our server much easier.
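For example, once the server is running, toggling the agent for the currently open channel boils down to the following (assuming we have a `channel` object with a defined `id`):

```ts
// Spin an AI agent up (or down) for the current channel.
if (channel.id) {
  await startAI(channel.id);
  // ...and later, to deactivate it:
  await stopAI(channel.id);
}
```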
5. Implement the AI Message Resolver
We have covered the interaction with the backend and will now switch our focus to building the UI. We want to let the SDK know how we resolve AI messages. Since the AI backend is currently set up to return the custom field `ai_generated` together with our message, we will check for that. However, you can check for any additional fields here to determine whether a message is AI-generated or not.

This can be done using the `isMessageAIGenerated` prop on the `Chat` component we created earlier, like so:
```tsx
// ... rest of the code
// (add MessageType to the imports from 'stream-chat-react-native')

export default () => {
  return (
    <SafeAreaView style={{ flex: 1, backgroundColor: 'white' }}>
      <AppProvider>
        <GestureHandlerRootView style={{ flex: 1 }}>
          <OverlayProvider value={{ style: chatTheme }}>
            <Chat
              client={chatInstance}
              isMessageAIGenerated={(message: MessageType) => !!message.ai_generated}
            >
              <NavigationContainer>
                <NavigationStack />
              </NavigationContainer>
            </Chat>
          </OverlayProvider>
        </GestureHandlerRootView>
      </AppProvider>
    </SafeAreaView>
  );
};
```
Now that we’ve done this, the SDK knows which messages are AI-generated, and we can proceed further!
6. Creating a Typing Indicator
With all the configuration code out of the way, let’s start building the UI and add some code to handle the AI typing indicator. This comes as an out-of-the-box component with some default UI called `AITypingIndicatorView`. For the sake of this guide, let’s put it right above our `MessageList` component within `App.tsx`:
```tsx
// ... rest of the code
// (add AITypingIndicatorView to the imports from 'stream-chat-react-native')

const ChannelScreen: React.FC<{
  navigation: StackNavigationProp<NavigationParamsList, 'ChannelScreen'>;
}> = () => {
  const { channel } = useAppContext();

  if (!channel) {
    return null;
  }

  return (
    <Channel channel={channel}>
      <MessageList />
      <AITypingIndicatorView />
      <MessageInput />
    </Channel>
  );
};

// ... rest of the code
```
This will ensure a typing indicator appears whenever the AI state is set to `Thinking` or `Generating`. If we wish to create our own custom interpretation of this indicator, we can easily do so by using the `useAIState` hook available in the SDK. It can be invoked as follows:
```ts
const { aiState } = useAIState(channel);
```
and it will always return the current AI state. Based on this, we can craft our custom solution and decide both how and when to display the typing indicator. Here’s an example of what that might look like:
```tsx
import { Text, View } from 'react-native';
import type { Channel as ChannelType } from 'stream-chat';
import { AIStates, useAIState } from 'stream-chat-react-native';

const MyAITypingIndicatorView = ({ channel }: { channel: ChannelType }) => {
  const { aiState } = useAIState(channel);

  return aiState === AIStates.Generating || aiState === AIStates.Thinking ? (
    <View
      style={{
        width: 40,
        height: 40,
        borderRadius: 20,
        backgroundColor: '#6200EE',
        justifyContent: 'center',
        alignItems: 'center',
        position: 'absolute',
        bottom: 75,
        left: 15,
      }}
    >
      <Text
        style={{
          color: '#FFFFFF',
          fontSize: 18,
          fontWeight: '500',
        }}
      >
        G
      </Text>
    </View>
  ) : null;
};
```
which we can use in place of our generic `AITypingIndicatorView`.
7. Add UI for Handling the AI
Now that we’ve created the UI components, let’s look at how the React Native SDK renders the AI responses themselves. The rendering of a message in a typewriter-like animation comes ingrained in the SDK for messages that have their `ai_generated` property set to `true`, and it fully supports markdown as well. The component that controls this is called `StreamingMessageView`, and it can be overridden through the `Channel` component. It works as a wrapper around `MessageTextContainer`, which implements the typewriter animation, and it also lets us configure the speed of the animation (through `letterInterval` and `renderingLetterCount`). It also comes with a hook called `useStreamingMessage`, which provides us with the state of the typewriter animation out of the box in the event that we want to completely implement our own UI.
To use the `StreamingMessageView`, we don’t need to do anything. If we wish to override its behavior, on the other hand, we might want to do something like:
```tsx
const MyStreamingMessageView = (props) => (
  <View style={{ backgroundColor: 'red', padding: 10 }}>
    <StreamingMessageView {...props} />
  </View>
);

// or

const MyStreamingMessageView = (props) => <MessageTextContainer {...props} />;

// ... rest of the code

const ChannelScreen: React.FC<{
  navigation: StackNavigationProp<NavigationParamsList, 'ChannelScreen'>;
}> = () => {
  const { channel } = useAppContext();

  if (!channel) {
    return null;
  }

  return (
    <Channel channel={channel} StreamingMessageView={MyStreamingMessageView}>
      <MessageList />
      <AITypingIndicatorView />
      <MessageInput />
    </Channel>
  );
};
```
where the first definition would put a red background around `ai_generated` messages, and the second one would get rid of the typewriter behavior of the message and treat it as a normal one.
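If we want to skip `StreamingMessageView` entirely and build our own UI, the `useStreamingMessage` hook exposes the typewriter state directly. Here is a minimal sketch; the option and return value names below (`text`, `letterInterval`, `renderingLetterCount`, `streamedMessageText`) are assumptions based on the hook's description, so verify them against the SDK's TypeScript definitions:

```tsx
import { Text } from 'react-native';
import { useStreamingMessage } from 'stream-chat-react-native';

// Sketch of a fully custom streaming text view; the hook's exact signature
// is an assumption and should be checked against the SDK typings.
const MyTypewriterText = ({ text }: { text: string }) => {
  const { streamedMessageText } = useStreamingMessage({
    text, // the (possibly still-growing) message text
    letterInterval: 30, // ms between rendering steps
    renderingLetterCount: 2, // letters revealed per step
  });

  return <Text>{streamedMessageText}</Text>;
};
```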
For the purposes of this guide, we will leave it as it is.
As some additional UI, however, let’s add a button that allows us to activate or deactivate the chatbot in a given channel. To do this, we’ll first introduce another utility hook, `useWatchers`, in `useWatchers.ts`:
```ts
import { useCallback, useEffect, useState } from 'react';
import { Channel } from 'stream-chat';

export const useWatchers = ({ channel }: { channel: Channel }) => {
  const [watchers, setWatchers] = useState<string[] | undefined>(undefined);
  const [loading, setLoading] = useState(false);
  const [error, setError] = useState<Error | null>(null);

  const queryWatchers = useCallback(async () => {
    setLoading(true);
    setError(null);

    try {
      const result = await channel.query({ watchers: { limit: 5, offset: 0 } });
      setWatchers(result?.watchers?.map((watcher) => watcher.id));
      setLoading(false);
      return;
    } catch (err) {
      console.error('An error has occurred while querying watchers: ', err);
      setError(err as Error);
      // Make sure we do not stay stuck in the loading state on failure.
      setLoading(false);
    }
  }, [channel]);

  useEffect(() => {
    queryWatchers();
  }, [queryWatchers]);

  useEffect(() => {
    const watchingStartListener = channel.on('user.watching.start', (event) => {
      const userId = event?.user?.id;
      if (userId && userId.startsWith('ai-bot')) {
        setWatchers((prevWatchers) => [
          userId,
          ...(prevWatchers || []).filter((watcherId) => watcherId !== userId),
        ]);
      }
    });

    const watchingStopListener = channel.on('user.watching.stop', (event) => {
      const userId = event?.user?.id;
      if (userId && userId.startsWith('ai-bot')) {
        setWatchers((prevWatchers) =>
          (prevWatchers || []).filter((watcherId) => watcherId !== userId),
        );
      }
    });

    return () => {
      watchingStartListener.unsubscribe();
      watchingStopListener.unsubscribe();
    };
  }, [channel]);

  return { watchers, loading, error };
};
```
This hook is responsible for keeping an up-to-date list of the channel’s watchers.
Next, we can add the view that displays the button, positioned in the top right corner above our `MessageList`:
```tsx
// (imports needed: Pressable from 'react-native', useWatchers, startAI, and stopAI)

const ControlAIButton = ({ channel }: { channel: ChannelType }) => {
  const channelId = channel.id;
  const { watchers, loading } = useWatchers({ channel });
  const [isAIOn, setIsAIOn] = useState(false);

  useEffect(() => {
    if (watchers) {
      setIsAIOn(watchers.some((watcher) => watcher.startsWith('ai-bot')));
    }
  }, [watchers]);

  const onPress = async () => {
    if (!channelId) {
      return;
    }

    const handler = () => (isAIOn ? stopAI(channelId) : startAI(channelId));
    await handler();
  };

  return watchers && !loading ? (
    <Pressable
      style={{
        padding: 8,
        position: 'absolute',
        top: 18,
        right: 18,
        backgroundColor: '#D8BFD8',
        borderRadius: 8,
        shadowColor: '#000',
        shadowOffset: { width: 0, height: 4 },
        shadowOpacity: 0.3,
        shadowRadius: 5,
        elevation: 5,
      }}
      onPress={onPress}
    >
      <Text style={{ fontSize: 16, fontWeight: '500' }}>
        {isAIOn ? 'Stop AI 🪄' : 'Start AI 🪄'}
      </Text>
    </Pressable>
  ) : null;
};
```
And add it to the other components:
```tsx
// ... rest of the code

const ChannelScreen: React.FC<{
  navigation: StackNavigationProp<NavigationParamsList, 'ChannelScreen'>;
}> = () => {
  const { channel } = useAppContext();

  if (!channel) {
    return null;
  }

  return (
    <Channel channel={channel}>
      <MessageList />
      <ControlAIButton channel={channel} />
      <AITypingIndicatorView />
      <MessageInput />
    </Channel>
  );
};
```
The button looks for our definition of the chatbot (in this case, a watcher whose ID begins with the string `ai-bot`), updates its UI accordingly, and sends the HTTP requests we defined earlier. It allows our users to enable the AI only when needed and keep it disabled otherwise.
Finally, we want to support the possibility of stopping the generation of a message. This once again comes ingrained within the SDK in the form of the `StopMessageStreamingButton` component, which is rendered in place of the `SendButton` whenever the AI is in a `Generating` state, so we don’t really need to do anything here either. However, if we want to override it, we can again pass it as a prop to the `Channel` component. If we want to put the button elsewhere (not within `MessageInput`), we can also do that by passing `StopMessageStreamingButton={null}` and once again relying on the `useAIState` hook to determine how and when the button works.
So for example, we could decide to do something like:
```tsx
// (imports needed: AIStates and useAIState from 'stream-chat-react-native')

const MyStopGenerationButton = ({ channel }: { channel: ChannelType }) => {
  const { aiState } = useAIState(channel);

  return aiState === AIStates.Generating ? (
    <View style={{ padding: 20, backgroundColor: '#D8BFD8' }}>
      <Text>Stop generation</Text>
    </View>
  ) : null;
};

const ChannelScreen: React.FC<{
  navigation: StackNavigationProp<NavigationParamsList, 'ChannelScreen'>;
}> = () => {
  const { channel } = useAppContext();

  if (!channel) {
    return null;
  }

  return (
    <Channel channel={channel} StopMessageStreamingButton={null}>
      <MyStopGenerationButton channel={channel} />
      <MessageList />
      <ControlAIButton channel={channel} />
      <AITypingIndicatorView />
      <MessageInput />
    </Channel>
  );
};
```
to make the button no longer display within `MessageInput` but rather appear as a banner above our `MessageList` for ease of access.
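One caveat: the banner above is display-only. To actually interrupt generation, its press handler still needs to notify the backend. A minimal sketch, assuming the agent reacts to the `ai_indicator.stop` channel event used in Stream's AI examples (verify the event name against the backend code):

```tsx
// Hypothetical press handler for MyStopGenerationButton: broadcast a stop
// signal on the channel; the 'ai_indicator.stop' event type is an assumption.
const onStopPress = async () => {
  await channel.sendEvent({ type: 'ai_indicator.stop' });
};
```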
And with that, we’re done! Now we can run the app, open a channel, start the AI agent, and ask it some questions.
Conclusion
In this tutorial, we have built an AI assistant bot that works mostly out of the box with the Stream Chat React Native SDK:
- We have shown how to use our AI components to render LLM responses containing markdown, code, tables, etc.
- We have shown how to create a server that starts and stops AI agents that respond to user questions
- You have learned how to customize our React Native SDK to integrate these new AI features
If you want to learn more about our AI capabilities, head to our AI landing page. Additionally, check our React Native Docs to learn how you can provide more customizations to your chat apps. Get started by signing up for a free Stream account today.