How to use Interview Coder Plus
Install the desktop app, connect a model provider, test capture mode, then learn the shortcuts for real interview workflows.
The Mac download splits into ARM and Intel options on hover. Use ARM for Apple Silicon chips.
macOS install note
If macOS blocks the .dmg
Copy both lines for your Mac, paste them into Terminal, press Enter, then open the app again.
The second line may ask for your Mac password. Type it and press Enter; the password stays hidden while typing.
Apple Silicon Mac
M1 / M2 / M3 / M4
xattr -dr com.apple.quarantine ~/Downloads/Interview-Coder-arm64.dmg
sudo xattr -dr com.apple.quarantine /Applications/time.app
Intel Mac
Older Intel Macs
xattr -dr com.apple.quarantine ~/Downloads/Interview-Coder-x64.dmg
sudo xattr -dr com.apple.quarantine /Applications/time.app
Account
Install, sign in, and confirm access
Do this before you touch shortcuts. The app needs a logged-in account, active trial or subscription, and saved settings before AI features can run.
Install the app
Open the desktop build after downloading. Restart once if your OS asks for permissions.
Sign in
Use the same email you use on the website so the desktop app can read your access status.
Confirm access
Start the trial or confirm your paid subscription before testing screenshots and AI answers.
Settings
Choose one model provider
You only need one main model provider to start. Choose Gemini for the simplest screenshot workflow, or OpenRouter if you want one account that can route to many models. The cards below also show where to get the API key.
Easiest path for most users
Paste one API key and keep the default model.
Add this only if you want voice transcription.
Use the simplest capture mode for the first test.
Gemini
Fast multimodal workflows with screenshots.
In app settings
1. Open Google AI Studio.
2. Create a Gemini API key.
3. Copy the key and paste it under Gemini in the app.
4. Keep the default, or use another Gemini model.
Use a Gemini key with the Gemini provider.
If you change the model name, use the exact name from Gemini docs.
OpenRouter
Trying many models through one account.
In app settings
1. Sign in to OpenRouter.
2. Create an OpenRouter API key.
3. Copy the key and paste it under OpenRouter in the app.
4. Keep the default, or paste an exact model ID.
Use an OpenRouter key with the OpenRouter provider.
If you change the model name, copy the exact model ID from OpenRouter.
Other model providers for later
OpenAI, Anthropic, Groq, and custom endpoints are optional. Use them after the first provider is already working.
OpenAI
General coding help and explanations.
Optional
API key: paste the OpenAI API key under OpenAI
Model name: keep the default or change it later
1. Sign in to the OpenAI platform.
2. Create an OpenAI API key.
3. Copy the key and paste it under OpenAI in the app.
Anthropic
Reasoning-heavy explanations.
Optional
API key: paste the Anthropic API key under Anthropic
Model name: keep the default or change it later
1. Sign in to the Anthropic Console.
2. Create an Anthropic API key.
3. Copy the key and paste it under Anthropic in the app.
Groq through Custom
Very fast OpenAI-compatible inference.
Optional
Base URL: https://api.groq.com/openai/v1
API key: paste the Groq key under Custom
Model name: exact Groq model ID
1. Create a Groq account and open API Keys.
2. Create a Groq API key.
3. Choose Custom 1 or Custom 2 in the app.
Any custom endpoint
OpenAI-compatible providers.
Optional
Base URL: the provider's OpenAI-compatible endpoint, usually ending in /v1
API key: the key issued by that provider
Model name: the exact model ID from the provider
Custom prompt (optional): extra instructions for answer style
1. Confirm the provider is OpenAI-compatible.
2. Copy the provider base URL, usually ending in /v1.
3. Copy the API key and exact model ID.
Voice recognition
Set up voice after the model works
Voice recognition is a separate setting. The app first turns speech into transcript text, then sends that transcript through the same answer logic as screenshots and follow-ups.
Important order
First configure one AI model provider. Then add Deepgram for voice recognition. After that, click the Voice or Record button in the app to start transcription.
Deepgram
Recommended for speech recognition, voice follow-ups, and hands-free workflows.
In app settings
1. Open the Deepgram Console and select a project.
2. Create a new API key from the project keys page.
3. Select Deepgram under Voice Recognition in the app.
4. Paste the Deepgram key, choose a language, and save.
Deepgram is only for voice recognition, not the main coding model.
Do not paste a Deepgram key into the model provider fields.
API troubleshooting
Common setup issues
Most failed first runs come from a provider mismatch, a model name typo, or a custom Base URL that is missing the required path.
Invalid API key
Check that the key belongs to the selected provider, has no extra spaces, and has not been deleted or restricted.
Model not found
Use the exact model ID from the provider docs or model list. Model names are often case-sensitive.
Wrong Base URL
Custom providers usually need an OpenAI-compatible URL ending in /v1 or /openai/v1.
No credits or billing
Some providers require credits, billing, or project access before API calls work.
Voice key in the wrong field
Deepgram keys belong in Voice Recognition Settings, not the main model provider field.
Provider mismatch
If OpenRouter is selected, use an OpenRouter key. If Gemini is selected, use a Gemini key.
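Several of the checks above can be run mentally before the first API call. The sketch below mirrors the troubleshooting list as code; the function name and rules are illustrative, not part of the app.

```python
def find_setup_issues(provider, api_key, base_url=None):
    """Flag common setup mistakes from the troubleshooting list above."""
    issues = []
    # Extra spaces around a pasted key are a frequent "invalid key" cause.
    if api_key != api_key.strip():
        issues.append("API key has leading or trailing spaces")
    # Deepgram is only for voice recognition, never the main model provider.
    if provider.lower() == "deepgram":
        issues.append("Deepgram keys belong in Voice Recognition settings")
    # Custom providers usually need an OpenAI-compatible path on the URL.
    if base_url is not None and not base_url.rstrip("/").endswith("/v1"):
        issues.append("Base URL should usually end in /v1 or /openai/v1")
    return issues

# A key with stray spaces and a Base URL missing the /v1 path:
print(find_setup_issues("custom", " sk-example ", "https://api.groq.com/openai"))
```

A clean configuration returns an empty list; anything else points back at the matching item in the list above.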
Capture
Choose what the app should read
Set screenshot mode after the model works, then test one capture before you rely on it in an interview.
Best for normal coding platforms
Use this when the problem statement, editor, examples, and terminal are all relevant. It is the simplest mode for first-time setup.
Best for focused capture
Use this when you only want the prompt, a failed test, or one part of the screen. Voice screenshot keywords follow the same mode.
Workflows
Use the app by situation
Once access, model, and capture mode are ready, use these flows during coding rounds, follow-ups, and debugging. Actions can be triggered with buttons, shortcuts, or configured voice triggers where available.
Voice transcript
1. Start voice recognition in the app.
2. Let the app turn speech into transcript text.
3. Send the transcript directly for a spoken question.
4. Send the transcript plus screenshots when the screen matters.
Coding problem
1. Take a screenshot of the problem statement.
2. Add more screenshots if examples, constraints, or tests are on another screen.
3. Generate the answer.
Debugging
1. Stay on the Solution screen for the current problem.
2. Take screenshots of your code, error message, failed test, or output.
3. Generate another answer for a targeted fix.
Note: debugging can be slower because the app needs to reason about the correct fix.
Follow-up question
1. Voice follow-ups can use the current answer context.
2. Send the recognized transcript directly for a spoken follow-up.
3. If the follow-up depends on code, output, or the screen, take a screenshot too.
4. Send the transcript plus screenshots together for screen-based follow-ups.
5. Reset first when switching to a new screenshot problem.
New problem
1. For a new screenshot problem, reset the current problem context first.
2. Take screenshots of the new problem statement.
3. Generate the new answer.
4. For voice-only follow-ups, keep sending transcripts without resetting.
Advanced
Controls, auto mode, and custom behavior
Voice smart triggers and auto mode can send recognized speech automatically. Custom prompts, custom models, knowledge files, and local records help with advanced workflows.
Voice smart triggers
Voice trigger words can start actions from recognized speech.
- Answer triggers send the latest 4 recognized sentences for an answer.
- Screenshot keywords can take screenshots by voice.
- Edit answer triggers and screenshot keywords in Settings.
Auto mode
Auto mode listens for interviewer questions and can automatically send recognized questions.
- Turn on voice recognition and Auto mode before the interview if you want automatic sending.
- Shortcuts can still manually send the current transcript or screenshots while Auto mode is on.
- It uses microphone permission and voice recognition settings.
Local interview records
Local interview records keep previous sessions on the device for review.
- Find records below the Custom 1 and Custom 2 settings.
- Review past questions, screenshots, transcripts, and answers when available.
- Clear records when you no longer need them.
Custom prompts
Custom 1 and Custom 2 providers support an optional prompt to control how the assistant responds.
- Use it to prefer concise answers, step-by-step reasoning, a specific coding language, or interview-style explanations.
- Keep prompts short enough to be reliable under pressure.
- Save Settings after editing so the prompt is applied.
Custom models
Custom providers are for OpenAI-compatible endpoints where the user supplies the connection details.
- Select Custom 1 or Custom 2.
- Fill Base URL, API Key, and Model Name.
- Use a URL like https://your-openai-compatible.example.com/v1 when your provider requires the /v1 path.
- Use the exact model ID from your provider dashboard.
Local knowledge base
The app also has a knowledge base option for imported text or markdown files when using custom workflows.
- Import .txt or .md files.
- Rebuild the index after changing files.
- Enable the knowledge base only when those notes should influence answers.
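To make the rebuild step concrete, here is a toy sketch of what an index over imported .txt and .md notes conceptually looks like. This is purely illustrative and not the app's actual implementation; rebuilding after file changes simply means regenerating a mapping like this so lookups see the new contents.

```python
import pathlib
import re

def build_index(folder):
    """Toy keyword index: maps each lowercase word to the notes containing it."""
    index = {}
    for path in sorted(pathlib.Path(folder).iterdir()):
        if path.suffix not in (".txt", ".md"):
            continue  # only text or markdown files are imported
        # Index each distinct word in the file once.
        for word in set(re.findall(r"[a-z]+", path.read_text().lower())):
            index.setdefault(word, []).append(path.name)
    return index
```

Looking up a word then returns the note files that mention it, which is the kind of context the knowledge base can feed into answers when enabled.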
Keyboard
Keyboard shortcuts
Shortcuts trigger the same core actions as the app buttons. On macOS use Cmd. On Windows use Ctrl.
Essential
Window and reading
Before the interview
Run a screen-share check
Do this once on the same computer, monitor setup, and meeting app you plan to use.
1. Open the app.
2. Join a test meeting.
3. Share your screen.
4. Capture a sample prompt.
5. Generate an answer.
Troubleshooting
Common fixes
Most setup issues come from visibility toggles, permissions, provider settings, or shortcut conflicts.
I cannot see the app
Press Cmd/Ctrl + B to show or hide the app. If it still does not appear, restart the desktop app.
Screenshots are not working
Check screen recording permissions, then restart the desktop app after changing permissions.
The AI says the key or model is invalid
Open Settings and confirm the selected provider, API key, model name, and custom base URL if you use one.
The answer text is too small or too large
Use Cmd/Ctrl + Shift + = to increase answer font, Cmd/Ctrl + Shift + - to decrease it, or Cmd/Ctrl + Shift + 0 to reset.
The window is in the wrong place
Use Cmd/Ctrl + Arrow Keys to move it. Use Cmd/Ctrl + R only when you want to reset the current problem context.
A shortcut does not work
Another app may be using the same shortcut. Close the conflicting app or change its shortcut if possible.
Need the full product walkthrough?
Watch the setup video again before testing the desktop app.