Sherlock sits between your LLM tools and the API, showing you every request on a live dashboard and auto-saving a copy of every prompt as Markdown and JSON.
It shells out to mitmproxy with "--set", "ssl_insecure=true"
This took all of 5 minutes to find reading through main.py on my phone.
https://github.com/jmuncor/sherlock/blob/fb76605fabbda351828...
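For context, ssl_insecure=true tells mitmproxy to skip verifying the upstream server's certificate, so the proxy will happily talk to anything impersonating the API. A minimal sketch of the difference (the wrapper invocation below is hypothetical, not the project's actual code):

    import subprocess

    # Hypothetical wrapper, for illustration only (not the project's actual code).
    # With ssl_insecure=true, mitmproxy accepts ANY upstream certificate:
    bad = ["mitmdump", "--listen-port", "8080", "--set", "ssl_insecure=true"]

    # Without it, mitmproxy keeps its default and verifies upstream certificates:
    good = ["mitmdump", "--listen-port", "8080"]

    subprocess.run(good)  # blocks until the proxy is stopped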
Edit: In case it’s not clear, you should not use this.
Since you asked: Not in a million years, no.
A bug of this type is either an honest typo or a sign that the author(s) don't take security seriously. Even if it were a typo, any serious author would've put a large FIXME right there when adding the line that disables verification. I know I would. In any case, it's a huge red flag for a MITM tool.
Seeing that it's vibe coded leads me to believe it's due to AI slop, not a simple typo from debugging.
No offense, but I wouldn’t trust anything else you published.
I think it’s great that you are learning and it is difficult to put yourself out there and publish code, but what you originally wrote had serious implications and could have caused real harm to users.
I know and trust mitmproxy. I'm warier and less likely to use a new, unknown tool that has such broad security/privacy implications. Especially these days with so many vibe-coded projects being released (no idea if that's the case here, but it's a concern I have nonetheless).
When I work with AI on large, tricky codebases I try to set up a collaboration where it hands off to me the things that would burn a large number of tokens (excess tool calls, imprecise searches, verbose output, reading large files without a range specified, etc.).
This would help narrow down exactly which of those to keep handling manually to stay within token budgets.
Note: "yourusername" in install git clone instructions should be replaced.
Especially because once you cross a few tens of thousands of words, the output gets worse.
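One way to put numbers on token usage with a proxy like this is to count tokens per captured request and see which tool calls dominate. A rough sketch using tiktoken (the encoding choice is an assumption; pick whatever matches your model):

    import json
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # assumption: adjust per model

    def tokens_in_request(body: str) -> int:
        """Rough token count for one captured chat-completions request body."""
        messages = json.loads(body).get("messages", [])
        return sum(len(enc.encode(m["content"]))
                   for m in messages
                   if isinstance(m.get("content"), str))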
https://github.com/quilrai/LLMWatcher
Made this Mac app for the same purpose; any thoughts would be appreciated.
You are a professional (insert concise occupation).
Be terse.
Skip the summary.
Give me the nitty-gritty details.
You can send all that using your AI client settings.
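Under the hood that's just a custom system message; via the OpenAI Python SDK it would look roughly like this (the model name and the occupation are placeholders):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {"role": "system", "content": "You are a professional network engineer. "
                                          "Be terse. Skip the summary. "
                                          "Give me the nitty-gritty details."},
            {"role": "user", "content": "Why would a TLS handshake fail behind a proxy?"},
        ],
    )
    print(resp.choices[0].message.content)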
If you build a DIY proxy you can also mess with the prompt on the wire: cut out portions of the system prompt, redirect the request to a different endpoint based on specific conditions, and so on.
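A small mitmproxy addon is enough for that; a rough sketch, assuming an OpenAI-style /chat/completions JSON body (the truncation and redirect rules are just placeholders):

    # rewrite_prompt.py -- run with: mitmdump -s rewrite_prompt.py
    import json
    from mitmproxy import http

    def request(flow: http.HTTPFlow) -> None:
        if "/chat/completions" not in flow.request.path:
            return
        body = json.loads(flow.request.get_text())

        # Placeholder policy: truncate the system prompt.
        for msg in body.get("messages", []):
            if msg.get("role") == "system" and isinstance(msg.get("content"), str):
                msg["content"] = msg["content"][:500]

        # Placeholder condition: route certain models to a local endpoint instead.
        if str(body.get("model", "")).startswith("local-"):
            flow.request.host = "127.0.0.1"
            flow.request.port = 11434

        flow.request.set_text(json.dumps(body))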
I'm surprised that there isn't a stronger demand for enterprise-wide tools like this. Yes, there are a few solutions, but when you contrast the new standard of "give everyone at the company agentic AI capabilities" with the prior paradigm of strong data governance (at least at larger orgs), it's a stark difference.
I think we're not far from the pendulum swinging back a bit. Not just because AI can't be used for everything, but because the governance on widespread AI use (without severely limiting what tools can actually do) is a difficult and ongoing problem.
Mine uses an Envoy sidecar on a sandbox container.
It answered the question "what the heck is this software sending to the LLM" but that was about all it was good for.
E.g. if a request contains confidential information (whatever you define that to be), then block it?
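Something like that fits in a few lines of a mitmproxy addon; a sketch, where the "confidential" pattern is obviously a placeholder for whatever your policy is:

    # block_secrets.py -- run with: mitmdump -s block_secrets.py
    import re
    from mitmproxy import http

    # Placeholder definition of "confidential"; substitute your own rules.
    CONFIDENTIAL = re.compile(r"(?i)api[_-]?key|password|BEGIN PRIVATE KEY")

    def request(flow: http.HTTPFlow) -> None:
        if CONFIDENTIAL.search(flow.request.get_text() or ""):
            flow.response = http.Response.make(
                403, b"blocked: request matched confidential-data policy",
                {"content-type": "text/plain"},
            )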
https://clauderon.com/ -- not really ready for others to use, though.
https://github.com/quilrai/LLMWatcher
Here is my take on the same thing, but as a Mac app, using BASE_URL to intercept Codex and Claude Code, and hooks for Cursor.
Yes.
I created something similar months ago [*] but using Envoy Proxy [1], mkcert [2], my own Go (golang) server, and Little Snitch [3]. It works quite well. I was the first person to notice that Codex CLI now sends telemetry to ab.chatgpt.com and other curiosities like that, but I never bothered to open-source my implementation because I know that anyone genuinely interested could easily replicate it in an afternoon with their favourite Agent CLI.
[1] https://www.envoyproxy.io/
[2] https://github.com/FiloSottile/mkcert
[3] https://www.obdev.at/products/littlesnitch/
[*] In reality, I created this something like 6 years ago, before LLMs were popular, originally as a way to inspect all outgoing HTTP(s) traffic from all the apps installed in my macOS system. Then, a few months ago, when I started using Codex CLI, I made some modifications to inspect Agent CLI calls too.
I've been intercepting its HTTP requests by running it inside a docker container with:
-e HTTP_PROXY=http://host.docker.internal:8080 -e HTTPS_PROXY=http://host.docker.internal:8080 -e NO_PROXY=localhost,127.0.0.1
It was working with mitmproxy for a very brief period, then the TLS handshake started failing and it kept asking for re-authentication when proxied.
You can get the whole auth flow and the initial conversation starters using Burp Suite and its certificate, but the Gemini chat responses fail in the CLI, which I understand is due to how Burp handles HTTP/2 (you can see the valid responses inside Burp Suite).
But it's in the README:
Prompt you to install it in your system trust store
build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/sherlock