In this tutorial, we will demonstrate how easy it is to build an AI assistant on top of Stream’s React Chat SDK and how to handle the interaction on both the client and server sides. We will use the Anthropic and OpenAI APIs as the out-of-the-box examples, but with this approach, developers can integrate any LLM service with Stream Chat and benefit from all of the same features, like generation indicators, markdown support, tables, etc. Stream offers a free Maker Plan for side projects and small businesses, making it accessible for innovative builds at any scale.
Talk is cheap, so here’s a video of the result:
We will use our new UI components for AI to render messages as they come, with animations similar to those of popular LLMs, such as ChatGPT. Our UI components can render LLM responses that contain markdown, code, tables, and much more.
We also provide UI for thinking indicators that can react to the new AI-related events on the server side.
The entire code can also be found here.
1. Project Setup
First, we will create and set up the React project with the Stream Chat SDK. We'll use Vite with the TypeScript template:
```bash
npm create vite chat-example -- --template react-ts
cd chat-example
npm i stream-chat stream-chat-react
```
To have the AI components available, we need to ensure that we’re using the latest version of the `stream-chat-react` package (at least `12.7.0`).
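If in doubt, we can bump the package to the latest release:

```bash
npm i stream-chat-react@latest
```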
Next, we jump into the project and open up `App.tsx`. First, we initialize the Stream React Chat SDK using the `useCreateChatClient` hook. We provide the necessary credentials for this tutorial, but you can create your own project on the dashboard. Once the initialization is done, we can build the UI with the components we get from the SDK.

Here’s the code for `App.tsx`:
```tsx
import type {
  ChannelFilters,
  ChannelOptions,
  ChannelSort,
  User,
} from 'stream-chat';
import {
  Channel,
  ChannelHeader,
  ChannelList,
  Chat,
  MessageInput,
  MessageList,
  Thread,
  useCreateChatClient,
  Window,
} from 'stream-chat-react';

import 'stream-chat-react/dist/css/v2/index.css';

// your Stream app information
const apiKey = 'zcgvnykxsfm8';
const userId = 'anakin_skywalker';
const userName = 'Anakin Skywalker';
const userToken =
  'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VyX2lkIjoiYW5ha2luX3NreXdhbGtlciJ9.ZwCV1qPrSAsie7-0n61JQrSEDbp6fcMgVh4V2CB0kM8';

const user: User = {
  id: userId,
  name: userName,
  image: 'https://vignette.wikia.nocookie.net/starwars/images/6/6f/Anakin_Skywalker_RotS.png',
};

const sort: ChannelSort = { last_message_at: -1 };
const filters: ChannelFilters = {
  type: 'messaging',
  members: { $in: [userId] },
};
const options: ChannelOptions = {
  limit: 10,
};

const App = () => {
  const client = useCreateChatClient({
    apiKey,
    tokenOrProvider: userToken,
    userData: user,
  });

  if (!client) return <div>Setting up client & connection...</div>;

  return (
    <Chat client={client}>
      <ChannelList filters={filters} sort={sort} options={options} />
      <Channel>
        <Window>
          <ChannelHeader />
          <MessageList />
          <MessageInput />
        </Window>
        <Thread />
      </Channel>
    </Chat>
  );
};

export default App;
```
This will not look great out of the box since the default project comes with some CSS preconfigured. We fix this by replacing the code inside `index.css` with this:
```css
html,
body,
#root {
  height: 100%;
}

body {
  margin: 0;
}

#root {
  display: flex;
}

.str-chat__channel-list {
  width: 30%;
}

.str-chat__channel {
  width: 100%;
}

.str-chat__thread {
  width: 45%;
}
```
With that done, we can run the app:
```bash
npm run dev
```
Now we can visit `localhost:5173` to see a basic chat setup.
2. Running the Backend
Before adding AI features to our React app, let’s set up our Node.js backend. The backend exposes two endpoints for starting and stopping an AI agent for a particular channel. Once started, the agent listens to all new messages and forwards them to the LLM. It delivers the results by sending a message and then updating its text as the response streams in.
We use the Anthropic API and the new Assistants API from OpenAI in this sample. We also have an example of function calling. By default, Anthropic is selected, but we can pass `openai` as the `platform` parameter in the `start-ai-agent` request if we want to use OpenAI.
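For example, once the backend below is running locally, starting an OpenAI-backed agent for a channel could look like this (a minimal sketch; the `channel_id` value is a placeholder for a real channel ID):

```ts
// Hypothetical request: start an AI agent backed by OpenAI instead of
// the default Anthropic by passing the optional `platform` parameter.
await fetch('http://127.0.0.1:3000/start-ai-agent', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ channel_id: 'my-channel-id', platform: 'openai' }),
});
```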
The sample also supports sending different states of the typing indicator (for example, `Thinking`, `Checking external sources`, etc.).
To run the server locally, we need to clone it:
```bash
git clone https://github.com/GetStream/ai-assistant-nodejs.git your_local_location
```
Next, we need to set up our `.env` file with the following keys:
```bash
ANTHROPIC_API_KEY=insert_your_key
STREAM_API_KEY=insert_your_key
STREAM_API_SECRET=insert_your_secret
OPENAI_API_KEY=insert_your_key
OPENWEATHER_API_KEY=insert_your_key
```
The `STREAM_API_KEY` and `STREAM_API_SECRET` can be found in our app's dashboard. To get an `ANTHROPIC_API_KEY`, we can create an account at Anthropic. Alternatively, we can get an `OPENAI_API_KEY` from OpenAI.
The example also uses function calling from OpenAI, which allows the LLM to call a function when a specific query is recognized. In this sample, we can ask what the weather is like in a particular location. If you want to support this feature, you can get your API key from OpenWeather (or any other service, but then we would need to update the request accordingly).
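To give an idea of what such a function looks like, here is a generic sketch of a weather tool in the OpenAI tools format; the name `get_current_weather` and its parameters are illustrative, not the repo’s exact definition:

```ts
// Illustrative tool definition in the OpenAI function-calling format.
// The backend's actual declaration may differ.
const weatherTool = {
  type: 'function' as const,
  function: {
    name: 'get_current_weather', // hypothetical name
    description: 'Get the current weather for a given location',
    parameters: {
      type: 'object',
      properties: {
        location: { type: 'string', description: 'City name, e.g. Amsterdam' },
      },
      required: ['location'],
    },
  },
};
```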
Next, we need to install the dependencies using the `npm install` command.
After the setup is done, we can run the sample from the root with the following command:
```bash
npm start
```
This will start listening to requests on `localhost:3000`.
3. Adding the AI to the Channel
We will add a button to the channel header for adding and removing the AI. However, we first need to determine whether the AI is already present in the channel so we know which option to present to the user.
We will use a concept called watchers. We’ll create a custom hook that determines whether a given channel is being watched by a user whose ID starts with `ai-bot` (note that the AI agent for a channel is named with the pattern `ai-bot-` followed by the channel ID).
Inside the hook, we listen to the two events `user.watching.start` and `user.watching.stop` to keep our information up to date. Then, we update the watchers accordingly and can consume that information wherever we want.
We create a new file called `useWatchers.tsx` and fill it with this code:
```tsx
import { useCallback, useEffect, useState } from 'react';
import { Channel } from 'stream-chat';

export const useWatchers = ({ channel }: { channel: Channel }) => {
  const [watchers, setWatchers] = useState<string[]>([]);
  const [error, setError] = useState<Error | null>(null);

  const queryWatchers = useCallback(async () => {
    setError(null);

    try {
      const result = await channel.query({ watchers: { limit: 5, offset: 0 } });
      setWatchers(result?.watchers?.map((watcher) => watcher.id) ?? []);
      return;
    } catch (err) {
      setError(err as Error);
    }
  }, [channel]);

  useEffect(() => {
    queryWatchers();
  }, [queryWatchers]);

  useEffect(() => {
    const watchingStartListener = channel.on('user.watching.start', (event) => {
      const userId = event?.user?.id;
      if (userId && userId.startsWith('ai-bot')) {
        setWatchers((prevWatchers) => [
          userId,
          ...(prevWatchers || []).filter((watcherId) => watcherId !== userId),
        ]);
      }
    });

    const watchingStopListener = channel.on('user.watching.stop', (event) => {
      const userId = event?.user?.id;
      if (userId && userId.startsWith('ai-bot')) {
        setWatchers((prevWatchers) =>
          (prevWatchers || []).filter((watcherId) => watcherId !== userId)
        );
      }
    });

    return () => {
      watchingStartListener.unsubscribe();
      watchingStopListener.unsubscribe();
    };
  }, [channel]);

  return { watchers, error };
};
```
With that, we can create a new channel header component. Inside, we use the `useChannelStateContext` hook to retrieve the `channel` and initialize the newly created `useWatchers` hook. With the watchers’ information, we create a variable called `aiInChannel` and show text accordingly.
Also, depending on that variable, we call the `start-ai-agent` or the `stop-ai-agent` endpoint of the Node.js backend we are running.
We create a new file called `MyChannelHeader.tsx` and fill it with this:
```tsx
import { useChannelStateContext } from 'stream-chat-react';
import { useWatchers } from './useWatchers';

export default function MyChannelHeader() {
  const { channel } = useChannelStateContext();
  const { watchers } = useWatchers({ channel });

  const aiInChannel =
    (watchers ?? []).filter((watcher) => watcher.includes('ai-bot')).length > 0;

  return (
    <div className='my-channel-header'>
      <h2>{channel?.data?.name ?? 'Chat with an AI'}</h2>
      <button onClick={addOrRemoveAgent}>
        {aiInChannel ? 'Remove AI' : 'Add AI'}
      </button>
    </div>
  );

  async function addOrRemoveAgent() {
    if (!channel) return;
    const endpoint = aiInChannel ? 'stop-ai-agent' : 'start-ai-agent';
    await fetch(`http://127.0.0.1:3000/${endpoint}`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ channel_id: channel.id }),
    });
  }
}
```
We want some basic styling for it, so we open up `index.css` and paste the following code at the bottom of the file:
```css
.my-channel-header {
  display: flex;
  align-items: center;
  justify-content: space-between;
  padding: 0rem 1rem;
  border-bottom: 1px solid lightgray;
}

.my-channel-header button {
  background: #005fff;
  color: white;
  padding: 0.5rem 1rem;
  border-radius: 0.5rem;
  border: none;
  cursor: pointer;
}
```
The only thing remaining is to replace the original channel header with our custom one. We open `App.tsx`, import `MyChannelHeader`, and replace the `<ChannelHeader>` component with `<MyChannelHeader>`.
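After the swap, the markup inside the `Window` looks like this:

```tsx
import MyChannelHeader from './MyChannelHeader';

// ...inside the App component's return value:
<Window>
  <MyChannelHeader />
  <MessageList />
  <MessageInput />
</Window>
```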
With that, we can add and remove an AI agent for the current channel, and the UI in the channel header updates accordingly.
4. Displaying Streamed Messages from the AI
Now that we have the option to add the AI to the channel, we can start asking questions. However, we still need to display the messages, react to the streaming responses from the LLM, and correctly render the Markdown (e.g., code samples, tables, etc.) that we get back.
Well, we have some good news: this is all baked into the SDK. We don’t need any additional setup; it works out of the box. Markdown is rendered, and the streaming response animates in while the LLM is generating.
However, in some cases, we might want to customize that, and for that, we offer some convenience functionality that we’ll quickly go over now.
At the core of that is the `useMessageTextStreaming` hook. It takes three parameters as input:

- `streamingLetterIntervalMs` (defaults to `30`) - The timeout between each typing animation step in milliseconds.
- `renderingLetterCount` (defaults to `2`) - The number of letters rendered on each update.
- `text` - The text that we want to render in a typewriter fashion.
We can use this to manually tune the parameters of the streaming response and use it in a custom component to render the LLM answer as it comes in.
To demonstrate how to use this, we create a new file called `MyMessage.tsx` and add the following code:
```tsx
import { useMessageContext, useMessageTextStreaming } from 'stream-chat-react';

export default function MyMessage() {
  const { message } = useMessageContext();
  const { streamedMessageText } = useMessageTextStreaming({
    renderingLetterCount: 10,
    streamingLetterIntervalMs: 50,
    text: message.text ?? '',
  });

  return <p className='my-message'>{streamedMessageText}</p>;
}
```
We add some basic styling to `index.css`:
```css
.my-message {
  border: 1px solid lightgray;
  padding: 1rem 1.5rem;
  margin: 0.5rem 1rem;
  border-radius: 0.5rem;
}
```
Then, inside `App.tsx`, we can customize the SDK to render our custom `MyMessage` component instead of the built-in one by handing it to the `Channel` component as the `Message` prop:
```tsx
<Channel Message={MyMessage}>
```
With that, we will render the message with a typewriter animation that we can customize using the parameters of the `useMessageTextStreaming` hook.
However, we have a problem here: we’re not rendering Markdown. Luckily, we also offer a specialized component called `StreamedMessageText`. It takes two props: the `message` and a `renderText` function. It uses the `useMessageTextStreaming` hook under the hood (see the code here) to provide the streaming functionality.
We can use this inside our `MyMessage` component like this:
```tsx
import { StreamedMessageText, useMessageContext } from 'stream-chat-react';

export default function MyMessage() {
  const { message, renderText } = useMessageContext();

  return <StreamedMessageText message={message} renderText={renderText} />;
}
```
We could further customize this if we have special requirements by providing our own `renderText` function (see the documentation for more details).
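As a starting point, a custom `renderText` can simply wrap the SDK’s default `renderText` export; this is a minimal sketch that delegates to the default and marks the spot where custom post-processing would go:

```tsx
import {
  renderText,
  StreamedMessageText,
  useMessageContext,
} from 'stream-chat-react';

export default function MyMessage() {
  const { message } = useMessageContext();

  // Delegate to the SDK's default renderText; a real implementation
  // could adjust the options or transform the returned elements.
  const myRenderText: typeof renderText = (text, mentionedUsers, options) =>
    renderText(text, mentionedUsers, options);

  return <StreamedMessageText message={message} renderText={myRenderText} />;
}
```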
This shows that we offer different layers of customization inside the SDK. If we want basic Markdown rendering with a streaming animation, we can use the built-in solution without any customization, and it will work flawlessly.
5. Adding an Indicator for the AI’s State
As with the message rendering, we also get a typing indicator for the AI out of the box. It shows the different states of the process, like `Thinking...` or `Checking external resources...`.
It comes with a basic UI that we can also customize. Just like before, we are provided with a hook, this one called `useAIState`. It returns an object of type `AIState`, which indicates the different states.
We can create a custom component with our own UI for that. We make a new file called `MyAIStateIndicator.tsx` and fill it with this code:
```tsx
import { AIState } from 'stream-chat';
import { useAIState, useChannelStateContext } from 'stream-chat-react';

export default function MyAIStateIndicator() {
  const { channel } = useChannelStateContext();
  const { aiState } = useAIState(channel);
  const text = textForState(aiState);

  return text && <p className='my-ai-state-indicator'>{text}</p>;

  function textForState(aiState: AIState): string {
    switch (aiState) {
      case 'AI_STATE_ERROR':
        return 'Something went wrong...';
      case 'AI_STATE_CHECKING_SOURCES':
        return 'Checking external resources...';
      case 'AI_STATE_THINKING':
        return "I'm currently thinking...";
      case 'AI_STATE_GENERATING':
        return 'Generating an answer for you...';
      default:
        return '';
    }
  }
}
```
We can add basic CSS to the `index.css` file for this as well:
```css
.my-ai-state-indicator {
  background: #005fff;
  color: white;
  padding: 0.5rem 1rem;
  border-radius: 0.5rem;
  border: 1px solid #003899;
  margin: 1rem;
}
```
The last thing is to find a place in the UI for this component. One option is to open `App.tsx` and add it inside the `Channel` component like this:
```tsx
<Channel>
  <Window>
    <MyChannelHeader />
    <MessageList />
    <MyAIStateIndicator />
    <MessageInput />
  </Window>
  <Thread />
</Channel>
```
This demonstrates how to use a custom component to indicate the AI state, but we can also use the built-in one. For this, we replace the `MyAIStateIndicator` we just added to the `Channel` with the built-in `AIStateIndicator` from the `stream-chat-react` package.
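That swap keeps the placement from the snippet above:

```tsx
import { AIStateIndicator } from 'stream-chat-react';

// ...inside the Channel component:
<Window>
  <MyChannelHeader />
  <MessageList />
  <AIStateIndicator />
  <MessageInput />
</Window>
```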
Conclusion
In this tutorial, we have built an AI assistant bot that works seamlessly with Stream’s React Chat SDK:
- We have shown how to use our AI components to render LLM responses containing markdown, code, tables, etc.
- We have shown how to run a server that starts and stops AI agents that respond to user questions.
- We have learned how to use the built-in components from the React SDK and customize them to integrate these new AI features.
If you want to learn more about our AI capabilities, head to our AI solutions page. Additionally, check our React docs to learn how you can provide more customizations to your chat apps.