cyberjerkXX 2 days ago [-]
There was a security engineer at my work doing something similar to this. He wanted to use LLMs as an IDS. I begged him to use BPF and stop wasting sprint cycles trying to reinvent a shittier slower wheel.
pfortuny 2 days ago [-]
An elliptical wheel, at most. A square one without an axle, most probably.
jagged-chisel 2 days ago [-]
When I first saw this comment, it was downvoted into gray. But I can't imagine why. Apropos, and likely pretty accurate.
whizzter 2 days ago [-]
For reference, Adam Dunkels is the developer of the lwIP and uIP IP stacks, as well as the C64 "Contiki" OS that used the latter to do networking.
amelius 2 days ago [-]
How fast can Claude do branch prediction in a CPU?
nxobject 2 days ago [-]
Value prediction's where the most gains are to be had. Of course, the irony of using an LLM to execute an LLM is delicious.
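For scale, the baseline any LLM "predictor" would have to beat is something like the classic 2-bit saturating counter. A quick sketch (illustrative only, not from the article):

```python
# Minimal sketch of a 2-bit saturating-counter branch predictor.
# States 0,1 predict not-taken; states 2,3 predict taken. Each outcome
# nudges the counter one step, so two misses are needed to flip the bias.
class TwoBitPredictor:
    def __init__(self):
        self.state = 0  # start strongly not-taken

    def predict(self):
        return self.state >= 2

    def update(self, taken):
        if taken:
            self.state = min(3, self.state + 1)
        else:
            self.state = max(0, self.state - 1)

p = TwoBitPredictor()
history = [True, True, False, True, True, True]
correct = 0
for taken in history:
    correct += (p.predict() == taken)
    p.update(taken)
```

After warming up, the counter sticks to "taken" for taken-heavy branches, which is why this tiny structure is so effective in hardware.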
andai 2 days ago [-]
At the end the article links this gem, ping via avian carriers: https://blug.linux.no/rfc1149/writeup/
Wouldn't this be faster with an agent skill that has code?
/skill-creator [or /create-skill] Write an agent skill
with code script(s) that use an existing user space IP library that works with your agent runtime, to [...]
ComposioHQ/awesome-claude-skills: https://github.com/ComposioHQ/awesome-claude-skills
anthropics/skills//skill-creator/SKILL.md: https://github.com/anthropics/skills/blob/main/skills/skill-...
/.agents/skills/skill-name/SKILL.md, scripts/{script_name.py,__init__.py}
https://agentskills.io/what-are-skills
Even faster would be to just use code in the first place!
westurner 7 hours ago [-]
But why would you use tokens instead of using the LLM to call code for this?
The minimum overhead of doing it with the LLM is the useful question.
ValdikSS 2 days ago [-]
That's why LLMs will eventually be used only for the initial interaction with the user in their language, to prepare the data for a specialized model.
Imagine face recognition working like a text chat, where the PC gets the frame from the camera and writes in the chat: "Who's that? Here's the RGB888 image in hex: ...".
FeepingCreature 2 days ago [-]
That's actually how vision language models already work, pretty much.
wongarsu 2 days ago [-]
And there's a reason nobody uses them for face recognition
Vision language models are an incredible achievement in generality and usability. But they pay a hefty price in fidelity and speed.
stingraycharles 2 days ago [-]
Huh? The images are tokenized in the same way language is, and it's all fed into one single model, not multiple smaller expert models.
The image gets rasterized into smaller pieces (e.g. 4x4 pixels) and each of those is assigned a token, similarly to how text is broken up into tokens. And the whole thing is fed into a single model.
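A toy sketch of that patchify-and-lookup idea (my illustration; real VLMs use learned patch embeddings rather than a literal codebook, and typically larger patches):

```python
# Illustrative only: split an image into fixed-size patches, then map each
# distinct patch to a discrete token id, analogous to text tokenization.
def patchify(image, patch=2):
    """image: list of rows of pixel values; returns flattened patches."""
    h, w = len(image), len(image[0])
    patches = []
    for py in range(0, h, patch):
        for px in range(0, w, patch):
            block = [image[py + dy][px + dx]
                     for dy in range(patch) for dx in range(patch)]
            patches.append(tuple(block))
    return patches

def tokenize(patches):
    """Toy codebook: assign token ids in order of first appearance."""
    codebook = {}
    return [codebook.setdefault(p, len(codebook)) for p in patches]

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [2, 2, 0, 0],
       [2, 2, 0, 0]]
tokens = tokenize(patchify(img))  # one token per 2x2 patch
```

The point is just that once patches become token ids, the image is "text" as far as the transformer is concerned.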
FeepingCreature 2 days ago [-]
Yes, I'm saying
> Imagine face recognition working like a text chat, where the PC gets the frame from the camera and writes in the chat: "Who's that? Here's the RGB888 image in hex: ...".
that's pretty much how it works.
stingraycharles 2 days ago [-]
But that isn’t a specialized model like the grandparent claimed, but rather a single, multi-modal model.
Dylan16807 2 days ago [-]
Yes, the "imagine" was showcasing the opposite of a specialized model to call it a bad idea.
stingraycharles 2 days ago [-]
Do you know that MoE is a thing?
jampekka 2 days ago [-]
The experts in MoEs aren't specialized in any meaningful task sense. At the level of what we would think of as tasks, experts in MoEs are selected essentially arbitrarily per token and per block.
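The per-token, per-block routing being described can be sketched like this (a toy illustration of top-k gating; real MoE layers use learned gate weights, not fixed scores):

```python
# Toy sketch of MoE routing: a gate scores every expert for each token,
# and only the top-k experts actually run. Nothing ties an expert to a
# human-legible "task"; the specialization falls out of training.
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def route(gate_scores, k=2):
    """Return indices of the top-k experts and their renormalized weights."""
    top = sorted(range(len(gate_scores)),
                 key=lambda i: gate_scores[i], reverse=True)[:k]
    weights = softmax([gate_scores[i] for i in top])
    return top, weights

# Hypothetical gate output for one token over 4 experts:
experts, weights = route([0.1, 2.0, -1.0, 1.5], k=2)
```

The token's output is then the weighted mix of just those k experts, which is why per-token routing says little about task-level specialization.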
stingraycharles 2 days ago [-]
It’s unsupervised, yes, but “unspecialized in any meaningful task sense” is incorrect; that’s the whole point. It’s just not in the sense of “this is a legal expert, this is a software developer”.
orbital-decay 2 days ago [-]
Optimal expert separation depends on the goal and can be pretty arbitrary, for example DeepSeek v4 separates them more or less by domain if I remember correctly.
fouc 2 days ago [-]
think about how much faster it would've been with a small local model!
mintflow 2 days ago [-]
This is cool. Leaving aside the token usage, perhaps it can help analyze TCP throughput if you redirect Wireshark/tcpdump results to it.
fl7305 2 days ago [-]
Opus 4.6 is already very good at troubleshooting all kinds of network problems if it has access to the command line tshark tool and the pcap files.
pram 2 days ago [-]
Agreed it’s pretty pro at deciphering logs, it figured out some weird NAT thing for me.
twoodfin 2 days ago [-]
Modulo Anthropic messing with the model for load mitigation, I wonder how stable this result is.
1,000 pings, how many correctly ponged?
mghackerlady 2 days ago [-]
Is pong an actual term? If so, I might've found a CS term better than wyde (2 bytes).
technothrasher 2 days ago [-]
I've heard people use that a lot, but the original metaphor was sonar not table tennis. So it is more appropriately an echo reply (which is what the ICMP return packet is called in the RFC).
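For reference, the checksum that both the echo request and the echo reply carry is the RFC 1071 Internet checksum; a minimal Python sketch:

```python
# RFC 1071 Internet checksum: one's-complement sum of 16-bit words,
# folded until no carry remains, then inverted.
def inet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"  # pad odd-length data with a zero byte
    total = sum((data[i] << 8) | data[i + 1] for i in range(0, len(data), 2))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# ICMP echo request header: type=8, code=0, checksum placeholder, id=1, seq=1.
hdr = bytearray(b"\x08\x00\x00\x00\x00\x01\x00\x01")
csum = inet_checksum(bytes(hdr))
hdr[2:4] = csum.to_bytes(2, "big")
assert inet_checksum(bytes(hdr)) == 0  # a valid packet checksums to zero
```

The checksum-to-zero property is the standard receiver-side validity check.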
johnwalkr 22 hours ago [-]
I'll give you another one: In Japanese there is the (unserious) term "enbug" which is the opposite of debug.
coldcity_again 2 days ago [-]
Yes, very much so. A ping is a request, a pong a response.
dymk 2 days ago [-]
2 bytes is a short…
mghackerlady 2 days ago [-]
2 bytes is a wyde. Because, to quote Knuth, "two bytes makes one wyde".
ShinyLeftPad 2 days ago [-]
How quickly does Claude respond when it acts like a user-space LLM chatbot?
baq 2 days ago [-]
African or European?
ShinyLeftPad 2 days ago [-]
Doesn't matter, the point is inception!
throwa356262 2 days ago [-]
Was not expecting to see Adam in an AI post!
For me, he is the opposite of slop (AI or otherwise). This is the kind of guy that writes an operating system for your toaster and leaves enough resources free to run DOOM.
bot403 2 days ago [-]
Now do the equivalent of just in time compilation. Claude sees that we need to respond to a lot of pings and writes a program to compute it instead of thinking about each one.
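The "compiled" version is tiny; a sketch (mine, not from the article) of deterministically turning an ICMP echo request into the reply:

```python
# What Claude computes token-by-token, as a plain function: flip the ICMP
# type from 8 (echo request) to 0 (echo reply), keep id/seq/payload, and
# recompute the RFC 1071 checksum.
def inet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"
    total = sum((data[i] << 8) | data[i + 1] for i in range(0, len(data), 2))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def echo_reply(request: bytes) -> bytes:
    """request: raw ICMP message (header + payload); returns the reply."""
    reply = bytearray(request)
    reply[0] = 0                  # type 8 (echo request) -> 0 (echo reply)
    reply[2:4] = b"\x00\x00"      # zero the checksum field before recomputing
    reply[2:4] = inet_checksum(bytes(reply)).to_bytes(2, "big")
    return bytes(reply)           # id, seq, and payload are echoed as-is
```

No inference required per packet, which is exactly the point of the comment above.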
ForHackernews 2 days ago [-]
>Fun? Oh yeah!
I think this author and I have different definitions of fun.
kmeisthax 2 days ago [-]
I eagerly await the publication of your RFC for IP over Slop Generators.
self_awareness 2 days ago [-]
If you wonder why your Copilot subscription has new limits that you hit every few days, it's because of PhDs like Adam.
Could Adam use a local model hosted on his own box? Probably yes. But he preferred to waste the service we all use just to produce a weak blog post that introduces absolutely no knowledge and serves no other purpose than to tell everyone that the author likes to waste resources and calls it "fun".
> Ridiculous? Yes. Wasteful of tokens? Sure. Fun? Oh yeah!
Do you really think it's fun to be one of these people who are the reason why the rest of us get more limits?
Excuse me? This is a cop-out used to justify heinous things done under capitalism.
Adam is the practical kind of PhD.
https://en.wikipedia.org/wiki/Adam_Dunkels
self_awareness 2 days ago [-]
No.
In our lives, the game we play, we can do whatever we like. There are consequences for some things, but generally we can do lots of things.
We can kill people and get away with it. We can also help them.
Should we hate life because it's possible to do really shitty things in life? I don't think so. We should hate the "players" who actually do shitty things.
wolttam 2 days ago [-]
Ok but hating a guy playing with an LLM is a bit extreme. These things are in many ways still just toys (toys that are becoming increasingly useful).
People almost certainly send dumber stuff to Claude than this, and just don’t write blog posts about it.
You could try other providers if Anthropic is too slow/limited, there’s some good alternatives.
(And your anger should probably be directed at Anthropic who hasn’t put in “better controls”, not the masses for not using the tool in the way you think they should. Hating rarely leads to anything productive.)
self_awareness 2 days ago [-]
I'm not angry, I'm just disappointed. Hating is just terminology I've used to reply to the parent poster. Personally I don't think hating anything will solve any problem.
True, people send dumber stuff to LLM, but some of them have the decency not to brag about it. But if they brag, then the common sense I imagine would be to tell them they are currently conducting a shitty thing. Yet on HN over 100 people told him that it's a cool idea. Also you seem to also be one of the "if the system allows it, then we all should do it" mentality people.
I mean you could argue that buying stuff and immediately throwing it to trash is perfectly fine, and if I'm mad about it, then it's my problem because "the person doesn't use the thing the same way I would imagine it to be used", but this argument just sounds silly to me. I know I'm right.
We have brains for a reason. We should use our free will, not offload any thinking to the system we live in.
Unless we don't have free will. I believe some of us don't have it, based on what I read.
tekno45 2 days ago [-]
Talking about "We have brains and don't offload your thinking" while admonishing someone because you think you would have used their tokens better is wild.
self_awareness 1 days ago [-]
You've literally just used the same argument from the previous poster which was even addressed by me in my parent reply. Talking about not thinking!
Aegis_Labs 2 days ago [-]
[flagged]
iluvcommunism 2 days ago [-]
[dead]
Ozzie-D 2 days ago [-]
[dead]
brcmthrowaway 2 days ago [-]
Next up: Claude replacement to handle simdjson processing.
jeremyjh 2 days ago [-]
Perhaps one day, all network services will be provided by LLMs natively. Truly, that would be a day in the future.
pastage 2 days ago [-]
You could read about that in 1992's "A Fire Upon the Deep" by Vernor Vinge. There is prompt injection in communication; in the book, certain protocols for information exchange cannot be deterministic, so if someone is too smart you get hacked.
lionkor 2 days ago [-]
"Perhaps" doing enough lifting to participate in a bodybuilder contest, in that sentence
vrighter 2 days ago [-]
why? We already have more efficient specialized hardware.
codezero 2 days ago [-]
I mean, we did decades of JavaScript, so... I mean... anything is possible, right? :)
twoodfin 2 days ago [-]
I’m sorry people aren’t getting it, or are so committed to downvoting humor here they’re tagging the good stuff.
fl7305 2 days ago [-]
Do some people still claim "LLMs are just dumb auto completers"?
Because this seems to disprove that claim pretty convincingly?
mystifyingpoi 2 days ago [-]
Oh, they are. It's just that the harness around it is able to pick up the commands it "autocompletes" and runs them for you. LLM can't run anything, it never could.
fl7305 23 hours ago [-]
What do you mean that the "harness ran the commands"?
It looks to me like the LLM "executed" the logic in pure output tokens, not by using any kind of external tool calls?
mystifyingpoi 22 hours ago [-]
Right, I mean that the logic of creating the packet was in the output tokens, sure. But the actual sending of the packet was done by bash command.
AlienRobot 2 days ago [-]
It proves that code, specifically any code in the form of bytes, is also language.
coldcity_again 2 days ago [-]
I like that. And if poetry can be defined as succinct use of language (perhaps a dubious assertion), then code can be poetry.