Desktop app · macOS + Windows · Version 2.0.2

How to use Interview Coder Plus

Install the desktop app, connect a model provider, test capture mode, then learn the shortcuts for real interview workflows.

The Mac download splits into ARM and Intel options on hover. Use ARM for Apple Silicon chips.

macOS install note

If macOS blocks the .dmg

Terminal

Copy both lines for your Mac, paste them into Terminal, press Enter, then open the app again.

The second line may ask for your Mac password. Type it and press Enter; the password stays hidden while typing.

Apple Silicon Mac

M1 / M2 / M3 / M4

xattr -dr com.apple.quarantine ~/Downloads/Interview-Coder-arm64.dmg
sudo xattr -dr com.apple.quarantine /Applications/time.app

Intel Mac

Older Intel Macs

xattr -dr com.apple.quarantine ~/Downloads/Interview-Coder-x64.dmg
sudo xattr -dr com.apple.quarantine /Applications/time.app

Account

Install, sign in, and confirm access

Do this before you touch shortcuts. The app needs a logged-in account, active trial or subscription, and saved settings before AI features can run.

1

Install the app

Open the desktop build after downloading. Restart once if your OS asks for permissions.

2

Sign in

Use the same email you use on the website so the desktop app can read your access status.

3

Confirm access

Start the trial or confirm your paid subscription before testing screenshots and AI answers.

Settings

Choose one model provider

You only need one main model provider to start. Choose Gemini for the simplest screenshot workflow, or OpenRouter if you want one account that can route to many models. The cards below also show where to get the API key.

Easiest path for most users

1. Gemini

Paste one API key and keep the default model.

2. Deepgram

Add this only if you want voice transcription.

3. Full Screen

Use the simplest capture mode for the first test.

Gemini

Fast multimodal workflows with screenshots.

Gemini

In app settings

API key: paste the Gemini API key under Gemini
Model name: keep the default or change it later
  1. Open Google AI Studio.
  2. Create a Gemini API key.
  3. Copy the key and paste it under Gemini in the app.
  4. Keep the default, or use another Gemini model.

Use a Gemini key with the Gemini provider.

If you change the model name, use the exact name from Gemini docs.
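Before pasting the key into the app, it can help to confirm the key is live. A minimal sketch, assuming the public Generative Language API models endpoint (the function names here are illustrative, not part of the app):

```python
# Hedged sketch: sanity-check a Gemini API key by listing available models.
# A 200 response means the key is valid; 400/403 usually means it is
# invalid, deleted, or restricted.
import urllib.error
import urllib.request


def gemini_models_url(api_key: str) -> str:
    # The v1beta models list takes the key as a query parameter.
    return f"https://generativelanguage.googleapis.com/v1beta/models?key={api_key}"


def gemini_key_works(api_key: str) -> bool:
    try:
        with urllib.request.urlopen(gemini_models_url(api_key), timeout=10) as resp:
            return resp.status == 200
    except urllib.error.URLError:
        return False

# Usage: gemini_key_works("AIza...") returning True means the key can list models.
```

If this returns False, recreate the key in Google AI Studio before troubleshooting the app itself.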

OpenRouter

Trying many models through one account.

OpenRouter

In app settings

API key: paste the OpenRouter API key under OpenRouter
Model name: keep the default or change it later
  1. Sign in to OpenRouter.
  2. Create an OpenRouter API key.
  3. Copy the key and paste it under OpenRouter in the app.
  4. Keep the default, or paste an exact model ID.

Use an OpenRouter key with the OpenRouter provider.

If you change the model name, copy the exact model ID from OpenRouter.
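OpenRouter exposes an OpenAI-compatible API, which is also a quick way to verify a key and model ID outside the app. A minimal sketch (the helper name and model ID are illustrative):

```python
# Hedged sketch: build a minimal OpenRouter chat request against its
# OpenAI-compatible endpoint. The model ID must match OpenRouter's model
# list exactly.
import json
import urllib.request


def build_openrouter_request(api_key: str, model_id: str, prompt: str) -> urllib.request.Request:
    body = json.dumps({
        "model": model_id,  # exact model ID copied from OpenRouter
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# Sending: urllib.request.urlopen(build_openrouter_request(key, model_id, "hi"))
```

A 401 here points at the key; a 404 or model error points at the model ID.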

Other model providers for later

OpenAI, Anthropic, Groq, and custom endpoints are optional. Use them after the first provider is already working.


OpenAI

General coding help and explanations.

Optional

API key: paste the OpenAI API key under OpenAI

Model name: keep the default or change it later

  1. Sign in to the OpenAI platform.
  2. Create an OpenAI API key.
  3. Copy the key and paste it under OpenAI in the app.

Anthropic

Reasoning-heavy explanations.

Optional

API key: paste the Anthropic API key under Anthropic

Model name: keep the default or change it later

  1. Sign in to the Anthropic Console.
  2. Create an Anthropic API key.
  3. Copy the key.

Groq through Custom

Very fast OpenAI-compatible inference.

Optional

Base URL: https://api.groq.com/openai/v1

API key: paste the Groq key under Custom

Model name: exact Groq model ID

  1. Create a Groq account and open API Keys.
  2. Create a Groq API key.
  3. Choose Custom 1 or Custom 2 in the app.

Any custom endpoint

OpenAI-compatible providers.

Optional

Base URL: the provider's OpenAI-compatible base URL

API key: the provider's API key

Model name: the exact model ID

Custom prompt (optional): instructions that control how the assistant responds

  1. Confirm the provider is OpenAI-compatible.
  2. Copy the provider base URL, usually ending in /v1.
  3. Copy the API key and exact model ID.
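The Base URL matters because the app appends the standard OpenAI-style path to it. A minimal sketch of that mapping (the helper name is illustrative; this is the common convention, not the app's exact internals):

```python
# Hedged sketch: how an OpenAI-compatible Base URL maps to the endpoint
# that gets called. The /chat/completions path after /v1 is the
# OpenAI-style convention.
def chat_completions_url(base_url: str) -> str:
    # Strip a trailing slash so "…/v1/" does not become "…/v1//chat/completions".
    return base_url.rstrip("/") + "/chat/completions"

# Example with the Groq base URL from above:
#   chat_completions_url("https://api.groq.com/openai/v1")
#   -> "https://api.groq.com/openai/v1/chat/completions"
```

If requests fail with 404, check whether the base URL is missing the /v1 (or /openai/v1) segment.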

Voice recognition

Set up voice after the model works

Voice recognition is a separate setting. The app first turns speech into transcript text, then sends that transcript through the same answer logic as screenshots and follow-ups.

Important order

First configure one AI model provider. Then add Deepgram for voice recognition. After that, click the Voice or Record button in the app to start transcription.

Deepgram

Recommended for speech recognition, voice follow-ups, and hands-free workflows.

Voice Recognition Settings

In app settings

API key: the Deepgram API key
Language: keep multi-language unless you know the interview language
  1. Open the Deepgram Console and select a project.
  2. Create a new API key from the project keys page.
  3. Select Deepgram under Voice Recognition in the app.
  4. Paste the Deepgram key, choose a language, and save.

Deepgram is only for voice recognition, not the main coding model.

Do not paste a Deepgram key into the model provider fields.
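One reason the keys are not interchangeable: Deepgram's speech API uses a different endpoint and auth scheme than the chat-model providers. A minimal sketch, assuming Deepgram's prerecorded transcription endpoint (the helper name is illustrative):

```python
# Hedged sketch: a Deepgram transcription request uses the /v1/listen
# endpoint with "Token" authorization, not the OpenAI-style "Bearer"
# scheme used by the chat-model providers above.
import urllib.request


def build_deepgram_request(api_key: str, audio: bytes) -> urllib.request.Request:
    return urllib.request.Request(
        "https://api.deepgram.com/v1/listen",
        data=audio,  # raw audio bytes; real calls would read a recording
        headers={
            "Authorization": f"Token {api_key}",  # note: Token, not Bearer
            "Content-Type": "audio/wav",
        },
    )
```

Pasting this key into a chat-provider field therefore fails immediately, since neither the endpoint nor the auth scheme matches.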

API troubleshooting

Common setup issues

Most failed first runs come from a provider mismatch, a model name typo, or a custom Base URL that is missing the required path.

Invalid API key

Check that the key belongs to the selected provider, has no extra spaces, and has not been deleted or restricted.

Model not found

Use the exact model ID from the provider docs or model list. Model names are often case-sensitive.

Wrong Base URL

Custom providers usually need an OpenAI-compatible URL ending in /v1 or /openai/v1.

No credits or billing

Some providers require credits, billing, or project access before API calls work.

Voice key in the wrong field

Deepgram keys belong in Voice Recognition Settings, not the main model provider field.

Provider mismatch

If OpenRouter is selected, use an OpenRouter key. If Gemini is selected, use a Gemini key.
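Key prefixes can catch some of these mismatches before saving. A minimal sketch using common prefix conventions (these are heuristics, not guarantees, and providers may change them):

```python
# Hedged sketch: rough heuristic for spotting a provider/key mismatch.
# Prefixes are common conventions only: OpenRouter keys usually start
# with "sk-or-", OpenAI keys with "sk-", Google API keys with "AIza".
KEY_PREFIXES = {
    "openrouter": "sk-or-",
    "openai": "sk-",
    "gemini": "AIza",
}


def likely_mismatch(provider: str, api_key: str) -> bool:
    prefix = KEY_PREFIXES.get(provider.lower())
    if prefix is None:
        return False  # unknown provider: cannot tell
    # OpenRouter keys also start with "sk-", so flag them under OpenAI.
    if provider.lower() == "openai" and api_key.startswith("sk-or-"):
        return True
    return not api_key.startswith(prefix)
```

A True result is a hint to re-check which provider is selected, not proof the key is wrong.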

Capture

Choose what the app should read

Set screenshot mode after the model works, then test one capture before you rely on it in an interview.

Full screen

Best for normal coding platforms

Use this when the problem statement, editor, examples, and terminal are all relevant. It is the simplest mode for first-time setup.

Region

Best for focused capture

Use this when you only want the prompt, a failed test, or one part of the screen. Voice screenshot keywords follow the same mode.

Workflows

Use the app by situation

Once access, model, and capture mode are ready, use these flows during coding rounds, follow-ups, and debugging. Actions can be triggered with buttons, shortcuts, or configured voice triggers where available.

Voice transcript

  1. Start voice recognition in the app.
  2. Let the app turn speech into transcript text.
  3. Send the transcript directly for a spoken question.
  4. Send transcript plus screenshots when the screen matters.

Coding problem

  1. Take a screenshot of the problem statement.
  2. Add more screenshots if examples, constraints, or tests are on another screen.
  3. Generate the answer.

Debugging

  1. Stay on the Solution screen for the current problem.
  2. Take screenshots of your code, error message, failed test, or output.
  3. Generate another answer for a targeted fix.
  4. Debugging can be slower because the app needs to reason about the correct fix.

Follow-up question

  1. Voice follow-ups can use the current answer context.
  2. Send the recognized transcript directly for a spoken follow-up.
  3. If the follow-up depends on code, output, or the screen, take a screenshot too.
  4. Send transcript plus screenshots together for screen-based follow-ups.
  5. Reset first when switching to a new screenshot problem.

New problem

  1. For a new screenshot problem, reset the current problem context first.
  2. Take screenshots of the new problem statement.
  3. Generate the new answer.
  4. For voice-only follow-ups, keep sending transcripts without resetting.

Advanced

Controls, auto mode, and custom behavior

Voice smart triggers and auto mode can send recognized speech automatically. Custom prompts, custom models, knowledge files, and local records help with advanced workflows.

Voice smart triggers

Voice

Voice trigger words can start actions from recognized speech.

  • Answer triggers send the latest 4 recognized sentences for an answer.
  • Screenshot keywords can take screenshots by voice.
  • Edit answer triggers and screenshot keywords in Settings.

Auto mode

Voice

Auto mode listens for interviewer questions and can automatically send recognized questions.

  • Turn on voice recognition and Auto mode before the interview if you want automatic sending.
  • Shortcuts can still manually send the current transcript or screenshots while Auto mode is on.
  • Auto mode requires microphone permission and uses your voice recognition settings.

Local interview records

History

Local interview records keep previous sessions on the device for review.

  • Find records below the Custom 1 and Custom 2 settings.
  • Review past questions, screenshots, transcripts, and answers when available.
  • Clear records when you no longer need them.

Custom prompts

Answer style

Custom 1 and Custom 2 providers support an optional prompt to control how the assistant responds.

  • Use it to prefer concise answers, step-by-step reasoning, a specific coding language, or interview-style explanations.
  • Keep prompts short enough to be reliable under pressure.
  • Save Settings after editing so the prompt is applied.

Custom models

Bring your own

Custom providers are for OpenAI-compatible endpoints where the user supplies the connection details.

  • Select Custom 1 or Custom 2.
  • Fill Base URL, API Key, and Model Name.
  • Use a URL like https://your-openai-compatible.example.com/v1 when your provider requires the /v1 path.
  • Use the exact model ID from your provider dashboard.

Local knowledge base

Context

The app also has a knowledge base option for imported text or markdown files when using custom workflows.

  • Import .txt or .md files.
  • Rebuild the index after changing files.
  • Enable the knowledge base only when those notes should influence answers.

Keyboard

Keyboard shortcuts

Shortcuts trigger the same core actions as the app buttons. On macOS use Cmd. On Windows use Ctrl.

Essential

| Action | macOS | Windows |
| --- | --- | --- |
| Show / hide app | Cmd + B | Ctrl + B |
| Take screenshot | Cmd + H | Ctrl + H |
| Process screenshots | Cmd + Enter | Ctrl + Enter |
| Send voice transcript | Cmd + S | Ctrl + S |
| Send transcript + screenshot | Cmd + D | Ctrl + D |
| Delete last screenshot / clear text | Cmd + L | Ctrl + L |
| Reset current problem | Cmd + R | Ctrl + R |
| Quit app | Cmd + Q | Ctrl + Q |

Window and reading

| Action | macOS | Windows |
| --- | --- | --- |
| Move window | Cmd + Arrow Keys | Ctrl + Arrow Keys |
| Decrease opacity | Cmd + [ | Ctrl + [ |
| Increase opacity | Cmd + ] | Ctrl + ] |
| Zoom in | Cmd + = | Ctrl + = |
| Zoom out | Cmd + - | Ctrl + - |
| Reset zoom | Cmd + 0 | Ctrl + 0 |
| Increase answer font | Cmd + Shift + = | Ctrl + Shift + = |
| Decrease answer font | Cmd + Shift + - | Ctrl + Shift + - |
| Reset answer font | Cmd + Shift + 0 | Ctrl + Shift + 0 |
| Scroll answer | Cmd + Shift + Up/Down | Ctrl + Shift + Up/Down |
| Switch answer page | Cmd + , / Cmd + . | Ctrl + , / Ctrl + . |

Before the interview

Run a screen-share check

Do this once on the same computer, monitor setup, and meeting app you plan to use.

1

Open the app

2

Join a test meeting

3

Share your screen

4

Capture a sample prompt

5

Generate an answer

Troubleshooting

Common fixes

Most setup issues come from visibility toggles, permissions, provider settings, or shortcut conflicts.

I cannot see the app

Press Cmd/Ctrl + B to show or hide the app. If it still does not appear, restart the desktop app.

Screenshots are not working

Check screen recording permissions, then restart the desktop app after changing permissions.

The AI says the key or model is invalid

Open Settings and confirm the selected provider, API key, model name, and custom base URL if you use one.

The answer text is too small or too large

Use Cmd/Ctrl + Shift + = to increase answer font, Cmd/Ctrl + Shift + - to decrease it, or Cmd/Ctrl + Shift + 0 to reset.

The window is in the wrong place

Use Cmd/Ctrl + Arrow Keys to move it. Use Cmd/Ctrl + R only when you want to reset the current problem context.

A shortcut does not work

Another app may be using the same shortcut. Close the conflicting app or change its shortcut if possible.

Need the full product walkthrough?

Watch the setup video again before testing the desktop app.

Back to video