# Contributing

`jupyter-lsp` and `jupyterlab-lsp` are open source software. All contributions
conforming to good sense, good taste, and the Jupyter Code of Conduct are
welcome, and will be reviewed by the contributors, time permitting.
You can contribute to the project through:

- creating language server specs
  - you can publish them yourself (it might be a single file)…
  - or advocate for adding your spec to the GitHub repository and its various
    distributions
  - these are great first issues, as you might not need to know any Python or
    JavaScript
- proposing parts of the architecture that can be extended
- improving documentation
- tackling Big Issues from the future roadmap
- improving testing
- reviewing pull requests
Thank you for all your contributions :heart:
## Provision the environment

A development environment requires, at a minimum:

- `python >=3.8,<3.13.0a0`
- `jupyterlab >=4.1.0,<5.0.0a0`
- `nodejs >=18,!=19,!=21,<23`

It is recommended to use a virtual environment (e.g. `virtualenv` or
`conda env`) for development.
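A minimal `virtualenv`-style setup might look like this (a sketch; the
environment path is illustrative):

```bash
# create and activate a virtual environment for development
python -m venv .venv
source .venv/bin/activate    # on POSIX
# .venv\Scripts\activate     # on Windows
```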
### conda

To use the same environment as the binder demo (recommended), start with a
Mambaforge base environment. While `conda` can be used in place of the `mamba`
commands below, `mamba` provides both faster solves and better error messages.

```bash
mamba env update -p ./.venv --file binder/environment.yml  # build, lint, unit test deps
source activate ./.venv                                    # activate on POSIX
activate ./.venv                                           # activate on Windows
```
Optionally extend your environment further for browser testing and/or docs:

```bash
mamba env update -p ./.venv --file requirements/atest.yml  # browser test deps
mamba env update -p ./.venv --file requirements/docs.yml   # docs deps
```
### pip

`pip` can be used to install most of the basic Python build and test
dependencies:

```bash
pip install -r requirements/dev.txt  # in a virtualenv, probably
```

`nodejs` must be installed by other means, with a Long Term Support
(even-numbered) version recommended:

```bash
sudo apt-get install nodejs  # ... on debian/ubuntu
sudo dnf install nodejs      # ... on fedora/redhat
```
### Single-step setup

Once your environment is created and activated, you can run:

```bash
python3 binder/postBuild
```

This performs all the basic setup steps, and is used for the binder demo.

This approach may not always work. Continue reading for step-by-step
instructions which also show all the underlying pieces.
### Manual installation

Install `jupyter-lsp` from source in your virtual environment:

```bash
python -m pip install -e python_packages/jupyter_lsp --ignore-installed --no-deps -vv
```

Enable the server extension:

```bash
jupyter server extension enable --sys-prefix --py jupyter_lsp
```

Install `npm` dependencies, build TypeScript packages, and link to JupyterLab
for development:

```bash
jlpm bootstrap
# if you installed `jupyterlab_lsp` before, uninstall it before running the next line
jupyter labextension develop python_packages/jupyterlab_lsp --overwrite
# optional, only needed for running a few tests for behaviour with missing language servers
jupyter labextension develop python_packages/klingon_ls_specification --overwrite
```

Note: on Windows you may need to enable Developer Mode first, as discussed in
jupyterlab#9564.
## Frontend Development

To rebuild the schemas, packages, and the JupyterLab app:

```bash
jlpm build
```

To watch the files and build continuously:

```bash
jlpm watch  # leave this running...
```

Now, after a change to TypeScript files, wait until both watchers finish
compilation, and refresh JupyterLab in your browser.

Note: the backend schema is not included in `watch`, and is only refreshed by
`build`.

To check and fix code style:

```bash
jlpm lint
```

To run the test suite (after running `jlpm build` or `watch`):

```bash
jlpm test
```

To run tests matching a specific phrase, forward the `-t` argument over yarn
and lerna to the test runners with two `--`:

```bash
jlpm test -- -- -t match_phrase
```

To verify that the webpack build wouldn't include problematic vendored
dependencies:

```bash
python scripts/distcheck.py
```
## Server Development

### Testing `jupyter-lsp`

```bash
python scripts/utest.py
```
## Documentation

To build the documentation:

```bash
python scripts/docs.py
```

To watch documentation sources and build continuously:

```bash
python scripts/docs.py --watch
```

To check internal links in the docs after building:

```bash
python scripts/docs.py --check --local-only
```

To check internal and external links in the docs after building:

```bash
python scripts/docs.py --check
```

Note: you may get spurious failures due to rate limiting, especially in CI,
but it's good to test locally.
## Browser-based Acceptance Tests

The browser tests will launch JupyterLab on a random port and exercise the
Language Server features with Robot Framework and SeleniumLibrary. It is
recommended to peruse the Robot Framework User's Guide (and the existing
`.robot` files in `atest`) before working on tests in anger.

First, ensure you've prepared JupyterLab for `jupyterlab-lsp` frontend and
server development.

Prepare the environment:

```bash
mamba env update -n jupyterlab-lsp --file requirements/atest.yml
```

or with `pip`:

```bash
pip install -r requirements/atest.txt     # ... and install geckodriver, somehow
sudo apt-get install firefox-geckodriver  # ... e.g. on debian/ubuntu
```

Run the tests:

```bash
python scripts/atest.py
```

The Robot Framework reports and screenshots will be in
`build/reports/{os}_{py}/atest/{attempt}`, with `(log|report).html` and the
captured screenshots being the most interesting artifacts, e.g.

```
build/
  reports/
    linux_310/
      atest/
        1/
          log.html
          report.html
          screenshots/
```
### Customizing the Acceptance Test Run

By default, all of the tests will be run, once.

The underlying `robot` command supports a vast number of options, and many
support wildcards (`*` and `?`) and boolean operators (`NOT`, `OR`). For more,
start with simple patterns.
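For example, a wildcard can select a family of related tags at once (the tag
pattern below is illustrative):

```bash
python scripts/atest.py --include "feature:comp*"
```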
#### Find robot options

```bash
robot --help
```

#### Run a suite

```bash
python scripts/atest.py --suite "05_Features.Completion"
```

#### Run a single test

```bash
python scripts/atest.py --test "Works When Kernel Is Idle"
```

#### Run tests with a tag

Tags are preferable to file names and test name matching in many settings, as
they are aggregated nicely between runs.

```bash
python scripts/atest.py --include feature:completion
```

… or only Python completion:

```bash
python scripts/atest.py --include feature:completionANDlanguage:python
```
### Just Keep Testing with `ATEST_RETRIES`

Run tests, and rerun only failed tests up to two times:

```bash
ATEST_RETRIES=2 python scripts/atest.py --include feature:completion
```

After running a bunch of tests, it may be helpful to combine them back
together into a single `log.html` and `report.html` with rebot. Like
`atest.py`, `combine.py` also passes through extra arguments:

```bash
python scripts/combine.py
```
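For instance, since extra arguments are passed through to `rebot`, the
combined top-level suite can be renamed with `rebot`'s standard `--name`
option (an assumed-compatible usage, shown as a sketch):

```bash
python scripts/combine.py --name "All Attempts"
```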
### Troubleshooting

- If you see the following error message:

  ```
  Parent suite setup failed: TypeError: expected str, bytes or os.PathLike object, not NoneType
  ```

  it may indicate that you have no `firefox` or `geckodriver` installed (or
  discoverable in the search path).
- If a test suite for a specific language fails, it may indicate that you have
  no appropriate language server installed (see LANGUAGESERVERS).
- If you are seeing errors like `Element is blocked by .jp-Dialog`, caused by
  the JupyterLab Build suggested dialog (likely if you have been using
  `jlpm watch`), ensure you have a "clean" lab (with production assets) with:

  ```bash
  jupyter lab clean
  jlpm build
  jlpm lab:link
  jupyter lab build --dev-build=False --minimize=True
  ```

  and re-run the tests.
- To display logs on the screenshots, configure the built-in `ILSPLogConsole`
  console to use the `'floating'` implementation.
- If you see:

  ```
  SessionNotCreatedException: Message: Unable to find a matching set of capabilities
  ```

  `geckodriver >=0.27.0` requires an actual Firefox executable. Several places
  will be checked (including where `conda-forge` installs it, as in CI); to
  test a Firefox not on your `PATH`, set the following environment variable:

  ```bash
  export FIREFOX_BINARY=/path/to/firefox     # ... unix
  set FIREFOX_BINARY=C:\path\to\firefox.exe  # ... windows
  ```

- If you see `Element ... could not be scrolled into view` in the
  `Open Context Menu for File` step, check if you have an alternative file
  browser installed (such as `jupyterlab-unfold`) which might interfere with
  testing (it is recommended to run the tests in a separate environment).
## Formatting

You can clean up your code and check it against the project's style guide
with:

```bash
python scripts/lint.py
```

Optionally, to fail on the first linter failure, provide `--fail-fast`.
Additional arguments are treated as filters for the linters to run:

```bash
python scripts/lint.py --fail-fast py  # or "js", "robot"
```
## Specs

While language servers can be configured by the user using a simple JSON or
Python configuration file, it is preferable to provide users with an option
that does not require manual configuration. The language server specifications
(specs) wrap the configuration (as it would be defined by the user) into a
Python class or function that can be either:

- distributed using PyPI/conda-forge and made conveniently available to users
  for `pip install` and/or `conda install`, or
- contributed to the collection of built-in specs of jupyter-lsp by opening a
  PR (preferable for popular language servers, say >100 users)

In either case, the detection of available specifications uses Python
`entry_points` (see the `[options.entry_points]` section in the jupyter-lsp
`setup.cfg`).

If an advanced user installs, locates, and configures their own language
server, it will always win over an auto-configured one.
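For reference, the manual configuration that a spec replaces might look
roughly like this in `jupyter_server_config.json` (a sketch; verify the exact
trait and key names against the configuration files documentation):

```json
{
  "LanguageServerManager": {
    "language_servers": {
      "cool-language-server": {
        "version": 1,
        "argv": ["cool-language-server"],
        "languages": ["cool"],
        "mime_types": ["text/cool", "text/x-cool"]
      }
    }
  }
}
```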
### Writing a spec

A spec is a Python callable (a function, or a class with a `__call__` method)
that accepts a single argument, the `LanguageServerManager` instance, and
returns a dictionary of the form:

```python
{
    "python-language-server": {  # the name of the implementation
        "version": SPEC_VERSION,  # the version of the spec schema (an integer)
        "argv": ["python", "-m", "pyls"],  # a list of command line arguments
        "languages": ["python"],  # a list of languages it supports
        "mime_types": ["text/python", "text/x-ipython"],
    }
}
```
The above example is only intended as an illustration and not as an
up-to-date guide. For details on the dictionary contents, see the schema
definition and built-in specs. Basic concepts (the meaning of the `argv` and
`languages` arguments) are also explained in the configuration files
documentation.
When contributing a specification, we recommend making use of the helper
classes and other utilities that take care of the common use cases:

- `ShellSpec` helps to create specs for servers that can be started from the
  command line
- `PythonModuleSpec` is useful for servers which are Python modules
- `NodeModuleSpec` will take care of finding Node.js modules

See the built-in specs for example implementations.
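For instance, a minimal `ShellSpec`-based spec might look like the sketch
below; the server name is hypothetical, and the exact attribute names should
be checked against the built-in specs:

```python
from jupyter_lsp.specs.utils import ShellSpec


class CoolLanguageServer(ShellSpec):
    """A hypothetical spec for a server started from the command line."""

    # executable to look up on PATH; the spec is skipped if it is missing
    key = cmd = "cool-language-server"
    languages = ["cool"]
    spec = dict(
        display_name="cool-language-server",
        mime_types=["text/cool", "text/x-cool"],
    )
```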
The spec should only be advertised if the command could actually be run:

- its runtime/interpreter (e.g. `julia`, `nodejs`, `python`, `r`, `ruby`) is
  installed
- the language server itself is installed (e.g. `python-language-server`)

otherwise an empty dictionary (`{}`) should be returned.
### Common Concerns

- some language servers need to have their connection mode specified
  - the `stdio` interface is the only one supported by `jupyter_lsp`
    - PRs welcome to support other modes!
- many language servers use `nodejs`
  - `LanguageServerManager.nodejs` will provide the location of our best guess
    at where a user's `nodejs` might be found
- some language servers are hard to start purely from the command line
  - use a helper script to encapsulate some complexity, or
  - use a `command` argument of the interpreter, if available (see the r spec
    and julia spec for examples)
### Example: making a pip-installable `cool-language-server` spec

Consider the following (absolutely minimal) directory structure:

```
- setup.py
- jupyter_lsp_my_cool_language_server.py
```

You should consider adding a LICENSE, some documentation, etc.
Define your spec:

```python
# jupyter_lsp_my_cool_language_server.py
from shutil import which


def cool(app):
    cool_language_server = which("cool-language-server")

    if not cool_language_server:
        return {}

    return {
        "cool-language-server": {
            "version": 1,
            "argv": [cool_language_server],
            "languages": ["cool"],
            "mime_types": ["text/cool", "text/x-cool"],
        }
    }
```
Tell `pip` how to package your spec:

```python
# setup.py
import setuptools

setuptools.setup(
    name="jupyter-lsp-my-cool-language-server",
    py_modules=["jupyter_lsp_my_cool_language_server"],
    entry_points={
        "jupyter_lsp_spec_v1": [
            "cool-language-server = jupyter_lsp_my_cool_language_server:cool"
        ]
    },
)
```

Test it!

```bash
python -m pip install -e .
```

Build it!

```bash
python setup.py sdist bdist_wheel
```
## Debugging

To see more log messages, navigate to `Settings` ❯ `Settings Editor` ❯
`Language Servers` and:

- adjust `Logging console verbosity level`
- switch `Ask servers to send trace notifications` to `verbose`
- toggle `Log all LSP communication with the LSP servers`
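A sketch of the equivalent JSON settings overrides (the key names below are
assumed from the settings editor labels; verify them against the plugin's
settings schema):

```json
{
  "loggingConsole": "floating",
  "loggingLevel": "debug",
  "setTrace": "verbose",
  "logAllCommunication": true
}
```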
For robot tests set:

```
Configure JupyterLab Plugin    {"loggingConsole": "floating", "loggingLevel": "debug"}
```
## Reporting

The human- and machine-readable outputs of many of the above tasks can be
combined into a single output. This is used by CI to check overall code
coverage across all of the jobs, collecting and linking everything in
`build/reports/index.html`.

```bash
python scripts/report.py
```