NeuroDesk Copilot

NeuroDesk Copilot: How to use LLMs for code autocompletion, chat support in NeuroDesk ecosystem

1 - NeuroDesk Copilot using GitHub Copilot

NeuroDesk Copilot: Using GitHub Copilot inside the NeuroDesk Environment

This guide provides detailed instructions on how to set up and use GitHub Copilot in the NeuroDesk environment, enabling code autocompletion, real-time chat assistance, and code generation.

Step 1: Log in with GitHub and follow the instructions below:

  1. Make sure you have a GitHub account with a valid GitHub Copilot subscription or access. If you are an eligible student, teacher, or open-source maintainer, you can access GitHub Copilot Pro for free. See Getting free access to Copilot Pro as a student, teacher, or maintainer.
  2. Log in to the NeuroDesk app or Neurodesktop using the GitHub single sign-on (SSO) option.
  3. Grant permission to GitHub Copilot when prompted to ensure Copilot can operate within your NeuroDesk environment.

Login with GitHub

Step 2: Use chat interface

  1. Open the Chat feature in NeuroDesk and type your query or command. Examples:
    • “Explain how to apply a Fourier Transform in NumPy.”
    • “Help me debug my data-loading function.”
  2. Press Enter. NeuroDesk Copilot will respond with explanations, tips, or suggested code.

Chat feature
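
For instance, a response to the Fourier Transform prompt above might include a sketch like the following (a minimal NumPy example for illustration, not Copilot's verbatim output):

```python
import numpy as np

# Sample a 5 Hz sine wave at 100 Hz for 1 second
fs = 100
t = np.arange(0, 1, 1 / fs)
signal = np.sin(2 * np.pi * 5 * t)

# Apply the real-valued FFT and build the matching frequency axis
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

# The dominant frequency of the spectrum should be 5 Hz
peak = freqs[np.argmax(np.abs(spectrum))]
print(peak)  # 5.0
```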

Step 3: Code completion

  1. Begin typing your code within a cell in NeuroDesk. As you type, Copilot provides inline suggestions, which you can accept by pressing the Tab key.
  2. If the suggestion isn’t relevant, continue typing or press Escape to dismiss it.

Code completion
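
As an illustration of how inline completion feels in practice, suppose you type a function signature and docstring; Copilot might then suggest a body like the one below (the `normalize` function is a hypothetical example, not a NeuroDesk API):

```python
# You type the signature and docstring...
def normalize(x):
    """Scale the values in x to the range [0, 1]."""
    # ...and Copilot may suggest a completion like the following,
    # which you accept with Tab:
    lo, hi = min(x), max(x)
    return [(v - lo) / (hi - lo) for v in x]

print(normalize([2, 4, 6]))  # [0.0, 0.5, 1.0]
```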

Step 4: Generate code

  1. When you need a larger block of code or a specific function, ask Copilot directly in the chat or as an inline comment. For example:
    • “Generate a Python function that reads EEG data from a CSV, cleans noise, and plots the channels.”
  2. Copilot will produce a snippet of code you can accept, edit, or reject entirely.

Generate Code
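
The kind of function Copilot might produce for the EEG prompt could resemble this standard-library sketch (the CSV layout with a header row of channel names is an assumption, and the plotting step is left as a comment to keep the example dependency-free):

```python
import csv

def load_and_clean_eeg(path, window=5):
    """Read EEG channels from a CSV (header row of channel names, one
    column per channel) and reduce noise with a simple moving average."""
    with open(path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        columns = list(zip(*[[float(v) for v in row] for row in reader]))

    cleaned = {}
    for name, values in zip(header, columns):
        half = window // 2
        cleaned[name] = [
            sum(values[max(0, i - half): i + half + 1])
            / len(values[max(0, i - half): i + half + 1])
            for i in range(len(values))
        ]
    return cleaned

# Plotting the channels would follow with matplotlib, e.g.
#   plt.plot(cleaned["Fz"])
```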

Configuring LLM Provider and models

You can configure the model provider and model options using the Notebook Intelligence Settings dialog. You can open this dialog from the JupyterLab Settings menu -> Notebook Intelligence Settings, by typing the /settings command in Copilot Chat, or from the command palette.

2 - NeuroDesk Copilot using Local LLMs

NeuroDesk Copilot: Using Locally Hosted LLMs inside the NeuroDesk Environment

Configuring LLM Provider and models

NeuroDesk Copilot allows you to harness the capabilities of local Large Language Models (LLMs) for code autocompletion and chat-based assistance directly within your NeuroDesk environment. This guide demonstrates how to configure Ollama as your local LLM provider and get started with chat and inline code completion. You can configure the model provider and model options using the Notebook Intelligence Settings dialog, which you can open from the JupyterLab Settings menu -> Notebook Intelligence Settings, by typing the /settings command in Copilot Chat, or from the command palette.

Step 1: Choose Ollama and the neurodesk model: type /settings in the chat interface, select Ollama as the provider and neurodesk as the model, then save the settings.

Choose settings
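
Under the hood, Ollama serves a local REST API on port 11434, so once a model is pulled you can also query it directly from a notebook. The sketch below builds a request against Ollama's documented `/api/generate` endpoint (the model name `qwen2.5-coder` is just an example; substitute whichever model you have pulled):

```python
import json
import urllib.request

OLLAMA_HOST = "http://localhost:11434"  # Ollama's default local address

def build_request(prompt, model="qwen2.5-coder"):
    """Build an HTTP request for Ollama's /api/generate endpoint."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        f"{OLLAMA_HOST}/api/generate",
        data=payload.encode(),
        headers={"Content-Type": "application/json"},
    )

def ask_ollama(prompt, model="qwen2.5-coder"):
    """Send the prompt and return the model's full response text."""
    with urllib.request.urlopen(build_request(prompt, model)) as resp:
        return json.load(resp)["response"]

# Usage (requires a running Ollama server with the model pulled):
#   print(ask_ollama("Write a NumPy one-liner that z-scores an array."))
```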

Step 2: Use chat interface

  1. Open the Chat feature in NeuroDesk and type your query or command. Examples:
    • “Explain how to import an MRI dataset in Python.”
    • “Help me debug my data-loading function.”
  2. Press Enter. NeuroDesk Copilot will respond with explanations, tips, or suggested code.

Chat feature

Step 3: Code completion

  1. Begin typing your code within a cell in NeuroDesk. As you type, Copilot provides inline suggestions, which you can accept by pressing the Tab key.
  2. If the suggestion isn’t relevant, continue typing or press Escape to dismiss it.

Code completion

To switch the inline completer from automatic suggestions to manual invocation, update the settings under Settings -> Settings Editor -> Inline Completer.