FAQ
- How do I see the logs of the extensions?
- I'm getting "TLS Error: unable to verify the first certificate" in the On-Prem Agent logs
- I'm getting "TLS Error: unable to verify the first certificate" in Swimm IDE Server logs in my IDE
- I don't know what certificates I need to mount into the On-Prem Agent
- I can't figure out what certificate to use in the On-Prem Agent to fix "TLS Error: unable to verify the first certificate"
- I'm seeing "Cannot read properties of null (reading 'versions')" in the On-Prem Agent logs /* TODO We should really have a better error for this, this is embarrassing */
- In the IDE logs (Swimm Server), I'm seeing "FetchError: request to https://… failed, reason: connect ETIMEDOUT"
- The AI generation seems to stop midway or just fails
- I'm using plain Docker, how do I check if the On-Prem Agent is running/stop it/see logs/etc?
- I'm using plain Docker, can I run commands in the container to help debug things?
- Testing Azure OpenAI with cURL
- I can access Azure OpenAI using cURL or code on my computer but can't inside a Docker Desktop container
How do I see the logs of the extensions?
The Swimm extension logs are available in:
- VS Code – Output panel → Swimm & Swimm Server
- IntelliJ – Help → Show/Open Log
- Visual Studio – Output panel → Swimm & Swimm Server
I'm getting "TLS Error: unable to verify the first certificate" in the On-Prem Agent logs
This error most often occurs when the On-Prem Agent accesses Azure OpenAI or other services through a forward/reverse proxy or VPN that performs TLS Man-in-the-Middle (MITM).
You will need to instruct the On-Prem Agent to trust the custom certificates using the NODE_EXTRA_CA_CERTS environment variable (the custom certificate bundle should be in PEM format).
For example, using plain Docker, you can add the following to your command:
docker run \
... \
-v $PWD/ca.crt:/etc/ca.crt:ro \
-e NODE_EXTRA_CA_CERTS=/etc/ca.crt \
... \
The path on the left-hand side of -v must be absolute; we use $PWD to make it absolute.
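After restarting the container, you can sanity-check that the variable and the mounted file are visible inside it (assuming a standard Linux base image where env and cat are available):
# Confirm the environment variable is set inside the container
docker exec <container id or name> env | grep NODE_EXTRA_CA_CERTS
# Confirm the certificate file is mounted at the path used in the example above
docker exec <container id or name> cat /etc/ca.crt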
If you don't know what the needed CA certificate is, you might be able to grab it from your operating system; see below.
I'm getting "TLS Error: unable to verify the first certificate" in Swimm IDE Server logs in my IDE
This error occurs when the certificate used in your TLS setup for the On-Prem Agent is not trusted by your local computer.
Swimm's extensions know to load additional certificates from your operating system's certificate store, and also from NODE_EXTRA_CA_CERTS if it is set in the environment of your IDE.
You can enable "Network Debug" in the Swimm extensions configuration to receive some additional output about this in the Swimm extension and IDE server logs.
I don't know what certificates I need to mount into the On-Prem Agent
In case you are getting "TLS Error: unable to verify the first certificate" and don't know which certificates you need, you might be able to grab the needed certificate from your local computer's operating system certificate store (since your IT department will likely have configured your computer with the needed enterprise certificate).
Run the following command against the host that the On-Prem Agent is failing to connect to (e.g. your Azure OpenAI deployment URL):
echo | openssl s_client -showcerts -connect foobar.openai.azure.com:443
The first certificate shown in the verification output (the highest depth, depth=2 here) is the root CA certificate that you need:
Connecting to 34.144.198.131
CONNECTED(00000005)
depth=2 C=US, O=Google Trust Services LLC, CN=GTS Root R1
verify return:1
depth=1 C=US, O=Google Trust Services, CN=WR3
verify return:1
depth=0 CN=...
verify return:1
---
Certificate chain
You will then need to open certmgr.msc
on Windows, or "Keychain Access" on
macOS, and look for a certificate in one of the stores/keychains that matches
the name of the certificate you found. Export that certificate in PEM format,
and you can then follow the instructions above to configure the On-Prem Agent with it.
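If the exported certificate comes out in DER (binary) format rather than PEM, which is common when exporting from certmgr.msc, you can convert it with openssl (the file names here are placeholders):
# Convert a DER-encoded export to PEM so it can be passed via NODE_EXTRA_CA_CERTS
openssl x509 -inform der -in exported-root-ca.cer -out ca.crt
# Sanity check: print the subject of the resulting PEM certificate
openssl x509 -in ca.crt -noout -subject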
I can't figure out what certificate to use in the On-Prem Agent to fix "TLS Error: unable to verify the first certificate"
As a last resort, you can set the NODE_TLS_REJECT_UNAUTHORIZED=0 environment variable to disable TLS certificate verification in the On-Prem Agent. This is insecure, but it can get things going until someone figures out the right certificate to pass in:
docker run \
... \
-e NODE_TLS_REJECT_UNAUTHORIZED=0 \
...
I'm seeing "Cannot read properties of null (reading 'versions')" in the On-Prem Agent logs
This means that your configuration file is malformed or not mounted correctly to the container. Check that:
- The configuration file is formatted correctly and matches the structure in the Installation Guide.
- The configuration file is mounted correctly into the container.
You can use docker exec -it <container_id> bash to open a shell inside the container and check the file manually.
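You can also confirm the bind mount from the host without entering the container (the container name is a placeholder):
# Show the container's mounts to verify the configuration file is bound where you expect
docker inspect <container id or name> --format '{{ json .Mounts }}'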
In the IDE logs (Swimm Server), I'm seeing "FetchError: request to https://… failed, reason: connect ETIMEDOUT"
This error occurs when your local computer fails to connect to your On-Prem Agent deployment. A few things to check:
- Is the On-Prem Agent correctly exposed to your local computer? Is the configuration to expose it on your container orchestrator correct? Is there any firewall in the way? (See the quick connectivity check after this list.)
- Make sure your proxy settings are correct. If the On-Prem Agent is deployed in your local network and your network uses a proxy, make sure that the no_proxy/NO_PROXY environment variable is set correctly. For example, in your profile file: export no_proxy=.company.internal
If you are having trouble setting this, you can use the swimm.noProxy setting in VS Code, or IntelliJ's proxy settings, instead of the environment variables.
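As a quick connectivity check from your local computer (a minimal sketch; replace the placeholder with your On-Prem Agent address and port):
# Any TLS handshake or HTTP response means basic connectivity works; ETIMEDOUT points at routing or firewall issues
curl -v https://<your-on-prem-agent-address>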
The AI generation seems to stop midway or just fails
It might be that there is some networking issue between your computer and the On-Prem Agent, or between the On-Prem Agent and Azure OpenAI.
One common issue is a load balancer in front of the On-Prem Agent with a short timeout that doesn't take SSE into account; you might have to increase the timeout in the load balancer using a vendor-specific mechanism.
It could also be that some troublesome load balancer/proxy on the path does not support HTTP streaming/Chunked Encoding/SSE or needs to be configured for it (for example, by disabling buffering in Nginx).
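One way to check whether a proxy on the path buffers or breaks streaming is to request a streamed completion with cURL and watch whether the response arrives incrementally (a sketch using the same placeholders as the Azure OpenAI cURL example below):
# -N disables cURL's own buffering; with "stream": true the tokens should trickle in rather than arrive all at once
curl -N -v "<deployment_url>/chat/completions?api-version=2024-02-01" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <api_key>" \
-d '{"messages": [{"role": "user", "content": "Count to 20"}], "stream": true}'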
I'm using plain Docker, how do I check if the On-Prem Agent is running/stop it/see logs/etc?
To list running containers, use:
docker ps
To stop a running container:
docker stop <container id or name>
# Or to also remove it:
docker rm -f <container id or name>
You can view the logs of a running container using:
docker logs <container id or name>
# Or to see the logs in real time:
docker logs -f <container id or name>
Normally we run the container in the background using -d. If you don't supply -d, it will start in the foreground, but without the ability to receive input, so you won't be able to kill it without using docker stop. Pass -i if you want to run it in the foreground and be able to kill it with Ctrl-C.
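For example (the ... stands for the rest of your usual docker run arguments):
# Run detached, in the background
docker run -d ...
# Run in the foreground so it can be stopped with Ctrl-C
docker run -i ...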
I'm using plain Docker, can I run commands in the container to help debug things?
You can run commands inside the container by using:
docker exec -it <container id or name> bash
This will give you an interactive shell into the container where you can run available commands to debug any issue you might be having.
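For example, a few commands you might run from that shell (assuming a typical Linux base image; adjust to what is actually available in the container):
node --version                      # confirm the Node.js runtime is present
env | grep -iE 'proxy|node_extra'   # check proxy- and certificate-related environment variables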
Testing Azure OpenAI with cURL
You can test your communication with Azure OpenAI using cURL (Bash/Zsh or PowerShell):
curl -v "<deployment_url>/chat/completions?api-version=2024-02-01" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <api_key>" \
-d '{"messages": [{"role": "user", "content": "Say Hello, World!"}]}'
Or for cmd on Windows:
curl -v "<deployment_url>/chat/completions?api-version=2024-02-01" ^
-H "Content-Type: application/json" ^
-H "Authorization: Bearer <api_key>" ^
-d '{""messages"": [{""role"": ""user"", ""content"": ""Say Hello, World!""}]}'
It is also possible to try using the Node.js OpenAI client from Swimm's On-Prem Agent container:
docker exec -it <container id or name> node
# And then in the Node.js REPL:
openai = require('openai')
client = new openai.AzureOpenAI({ apiKey: '', endpoint: '', deployment: '', apiVersion: '2024-10-21' })
await client.chat.completions.create({ messages: [{ role: 'user', content: 'Say "Hello, World!"' }] })
I can access Azure OpenAI using cURL or code on my computer but can't inside a Docker Desktop container
Docker Desktop (the official Docker distribution with a GUI) uses a virtual machine/WSL2 to run containers. As part of that, it does some fancy networking to give containers access to the network:
- It performs NAT, which makes traffic look like it comes from the Docker Desktop process, so make sure Docker Desktop is allowed to connect where you need it to in any local firewall.
- It intercepts HTTP/HTTPS traffic and proxies it via the HTTP proxy server of the host. If you can connect to the destination on your host without the proxy using a tool like cURL, you might need to disable the proxy in Docker Desktop; you can specify custom proxy settings for Docker Desktop under Settings -> Resources -> Proxies.
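A quick way to check whether the problem is specific to Docker Desktop's networking, rather than the On-Prem Agent itself, is to run the same cURL test from a throwaway container (curlimages/curl is a public image; the URL is a placeholder):
# If this fails the same way, the issue is Docker Desktop's networking (NAT/proxy) and not the On-Prem Agent
docker run --rm curlimages/curl -v "https://<deployment_url>"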