Artifacts (Experimental)
Added in v0.6 (Released on 2025-01-17)
Artifacts are a way to work with substantial, self-contained content in ChatWise. They're especially useful for content you plan to modify, reuse, or reference multiple times, such as code projects, charts, or documents.
When to Use Artifacts
Artifacts work best for:
- Substantial content that you want to iterate on
- Previewing React components or rendering charts
For simple questions or brief examples, ChatWise will respond directly in the chat instead.
Getting Started
To enable artifacts for a chat:
- Look for the leaf icon in the chat input bar, or use CMD-K to find "Toggle Artifacts"
- Click it to toggle artifacts for the current chat
- Start your conversation with a prompt like "Generate a chart for the US GDP growth in the last 10 years" - ChatWise will automatically create artifacts when appropriate
Here's an example where we ask GPT-4o to generate a privacy policy for an AI-powered bookmarking app:
Artifact Types
- React Component
- HTML/SVG
- Charts (based on Recharts)
- Diagrams (based on Mermaid)
- Markdown Documents
- More in the future... (send feedback here)
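To give a sense of the Charts type listed above, here's a rough sketch of the kind of Recharts-based React component a chart artifact might contain. The component name and the growth numbers are illustrative placeholders, not real output from ChatWise or real GDP data:

```tsx
// A sketch of what a Charts artifact might contain under the hood, built on
// Recharts. The growth numbers are placeholder values for illustration only.
import { LineChart, Line, XAxis, YAxis, CartesianGrid, Tooltip } from "recharts";

const data = [
  { year: "2021", growth: 5.8 },
  { year: "2022", growth: 1.9 },
  { year: "2023", growth: 2.5 },
];

export default function GdpGrowthChart() {
  return (
    <LineChart width={480} height={260} data={data}>
      <CartesianGrid strokeDasharray="3 3" />
      <XAxis dataKey="year" />
      <YAxis unit="%" />
      <Tooltip />
      <Line type="monotone" dataKey="growth" stroke="#2563eb" />
    </LineChart>
  );
}
```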
React Component
LLMs are good at coding, so you can ask one to generate React components for you. ChatWise can also preview the generated component:
For now, the React Component artifact only supports the following libraries:
- shadcn-ui
- lucide-react
- react
- tailwindcss
- framer-motion
- recharts
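As a quick illustration, a component artifact that stays within these libraries might look like the following sketch (the component name and props are illustrative only, not part of ChatWise itself):

```tsx
// A minimal sketch of a React component artifact using only the supported
// libraries above (lucide-react for the icon, Tailwind CSS for styling).
import { Sparkles } from "lucide-react";

export default function WelcomeCard({ title = "Hello from an artifact" }: { title?: string }) {
  return (
    <div className="flex items-center gap-2 rounded-xl border p-4 shadow-sm">
      <Sparkles className="h-5 w-5 text-amber-500" />
      <span className="text-lg font-medium">{title}</span>
    </div>
  );
}
```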
We can also preview your React component. This is done by writing the component to a temporary folder on your computer and using Bun to install the necessary dependencies and bundle it into a static HTML file. This step is secure because it only involves trusted dependencies, and Bun won't execute any scripts from them. If you don't have Bun installed on your computer, we automatically download it into the ~/Library/Application Support/app.chatwise/bin directory.
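For context, here's a minimal sketch of how such a preview pipeline could work with Bun's public APIs (Bun.spawn and Bun.build). This is an assumption about the general approach, not ChatWise's actual code; the function and file names are hypothetical:

```ts
// Hypothetical sketch of a component preview pipeline using Bun.
import { mkdtemp, writeFile } from "node:fs/promises";
import { tmpdir } from "node:os";
import { join } from "node:path";

async function buildPreview(componentSource: string): Promise<string> {
  // 1. Write the generated component into a temporary working directory.
  const dir = await mkdtemp(join(tmpdir(), "artifact-"));
  await writeFile(join(dir, "Artifact.tsx"), componentSource);
  await writeFile(
    join(dir, "package.json"),
    JSON.stringify({ dependencies: { react: "^18", "react-dom": "^18" } }),
  );

  // 2. Install dependencies. Bun does not run postinstall scripts for
  //    packages outside `trustedDependencies`, which keeps this step safe.
  await Bun.spawn(["bun", "install"], { cwd: dir }).exited;

  // 3. Bundle the component into a single browser-ready script that a
  //    static HTML page can load in the preview pane.
  const result = await Bun.build({
    entrypoints: [join(dir, "Artifact.tsx")],
    outdir: join(dir, "dist"),
    target: "browser",
  });
  return result.outputs[0].path;
}
```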
Cost
The system message for Artifacts support is around 3,671 tokens, which costs around $0.009 USD per request with GPT-4o.
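As a rough sanity check, assuming GPT-4o input pricing of $2.50 per 1M tokens (check your provider for current rates):

```ts
// Back-of-the-envelope cost of the ~3,671-token Artifacts system message,
// assuming GPT-4o input pricing of $2.50 per 1M tokens.
const systemPromptTokens = 3671;
const usdPerInputToken = 2.5 / 1_000_000;
console.log((systemPromptTokens * usdPerInputToken).toFixed(4)); // "0.0092" USD per request
```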
Model Support
You need models at least at the level of GPT-4o mini, DeepSeek v3, or Llama 70B to use artifacts; it works best with models like GPT-4o and Claude 3.5 Sonnet. A context window of over 16K tokens is also recommended, since the system message alone is already around 3.6K tokens.
Note that Gemini 2.0 Flash is not very reliable at the moment.
Known Issues
Model not responding with Artifacts
Prefer verbs like "generate" or "create" instead of "write" when using artifacts; for example, "generate a privacy policy for my AI chat app" instead of "write a privacy policy for my AI chat app". If the model still doesn't respond with Artifacts, try more explicit instructions like "generate ... using artifacts".