Alice and Bob walk into a quantum bar ⚛️
𝐵𝑎𝑟𝑡𝑒𝑛𝑑𝑒𝑟: "Why handshake?"
𝐵𝑜𝑏: "Alice has the key, I’m on #ESP32."

Sounds like #funForFriday, but it’s not just a joke. I turned this anecdote into a real demo: 👉 https://lnkd.in/epGxnntT

A Linux server and an ESP32 client build a post-quantum secure TCP channel in 6 steps 🔐:
1) Alice → Bob: send Kyber (ML-KEM-512) public key (post-quantum KEM)
2) Bob → Alice: encapsulate + send ciphertext
3) Both: decapsulate/derive → shared secret
4) Both: HKDF-SHA256 (extract + expand) → AES-256-GCM key
5) A → B: encrypted record `[IV | ciphertext | tag]` with seq-nr in AAD
6) B → A: encrypted record `[IV | ciphertext | tag]` with seq-nr in AAD

This is a lightweight proof of concept showing #PQC between a PC and a microcontroller. I used a reference Kyber implementation (no platform optimizations): ML-KEM-512 encapsulation on the ESP32 takes about 10 ms in my test (a minimal sketch of steps 3-6 follows at the end of this post).

𝐍𝐞𝐱𝐭 𝐬𝐭𝐞𝐩𝐬:
* Swap in an ESP32-optimized implementation and measure speed vs. resource use
* Compare handshake and memory cost vs. traditional TLS (ECDHE)
* Incorporate the key exchange into an existing TLS 1.3 scheme (X25519Kyber512 or similar?)
* Add authentication (this minimalist demo is currently vulnerable to MitM)

PS: A & B are doomed if the bartender is Mallory, but okay if the bartender is Shor. 😉
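Since the demo code itself isn’t pasted here, this is a minimal sketch of steps 3-6 in Go (1.24+), not the demo’s actual implementation. Assumptions: the stdlib crypto/mlkem ships ML-KEM-768/1024 only, so 768 stands in for the demo’s ML-KEM-512; the HKDF info string and the record contents are placeholder values; and both parties run in one process instead of over TCP.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/hkdf"
	"crypto/mlkem"
	"crypto/rand"
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

func main() {
	// Steps 1-3: Alice generates a keypair, Bob encapsulates against her
	// public key, Alice decapsulates -> both hold the same shared secret.
	// (In the real demo the public key and ciphertext cross the TCP link.)
	dk, err := mlkem.GenerateKey768() // Alice; 768 stands in for 512 here
	if err != nil {
		panic(err)
	}
	sharedBob, ct := dk.EncapsulationKey().Encapsulate() // Bob's side
	sharedAlice, err := dk.Decapsulate(ct)               // Alice's side
	if err != nil {
		panic(err)
	}

	// Step 4: HKDF-SHA256 (extract + expand) -> 32-byte AES-256-GCM key.
	// The info string is a placeholder, not the demo's actual label.
	key, err := hkdf.Key(sha256.New, sharedAlice, nil, "pqc-demo record key", 32)
	if err != nil {
		panic(err)
	}

	// Steps 5-6: seal a record as [IV | ciphertext | tag], with the
	// sequence number bound into the AAD so records can't be reordered.
	block, _ := aes.NewCipher(key)
	gcm, _ := cipher.NewGCM(block) // Seal appends the tag to the ciphertext
	iv := make([]byte, gcm.NonceSize())
	rand.Read(iv)
	var seq [8]byte
	binary.BigEndian.PutUint64(seq[:], 1) // first record
	record := append(iv, gcm.Seal(nil, iv, []byte("hello Bob"), seq[:])...)

	fmt.Printf("secrets match: %v, record: %x\n",
		string(sharedAlice) == string(sharedBob), record)
}
```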
More Relevant Posts
-
Are traditional LLMs slowing you down? Their limited context windows can create bottlenecks when analyzing lengthy documents, risking coherence and reliability in mission-critical tasks. Enter a vLLM deployment that delivers: ✅ 5x faster responses ✅ 4-5x higher throughput ✅... Read the blog for more 😏 https://hubs.li/Q03KC3tQ0
-
System calls are the interface between user space and the kernel. They are needed for fundamental things like reading a file or making a network call. But they are also expensive, because they involve work such as saving/restoring registers and switching page tables and stacks. More importantly, they can also make user-space code slower after the return from the kernel, and this part is often the more costly one. It happens because the process loses microarchitectural state: the instruction pipeline gets drained, branch predictor buffers get flushed, and so on. In my latest article, I talk about this in detail. The article shows the Linux kernel code that handles system calls and breaks down the performance implication of each step the kernel takes. It is more complicated and nuanced than I can describe here. For example, newer hardware and kernels may perform better than older ones, and behavior is not the same across hardware vendors (e.g. Intel vs AMD). So, check it out. Read it here: https://lnkd.in/g6WR-6Hh
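If you want a rough feel for the direct trap cost on your own machine, here is a tiny Go microbenchmark sketch (not from the article). It only captures the immediate user/kernel transition overhead, not the lingering microarchitectural effects described above, and the numbers will vary with hardware, kernel version, and mitigation settings:

```go
package main

import (
	"fmt"
	"syscall"
	"time"
)

func main() {
	const n = 1_000_000

	// getpid is about the cheapest syscall available (and is not served
	// from the vDSO), so this loop mostly measures transition overhead.
	start := time.Now()
	for i := 0; i < n; i++ {
		syscall.Getpid()
	}
	perCall := time.Since(start) / n

	fmt.Printf("getpid: ~%v per call\n", perCall)
}
```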
-
Go 1.25 just introduced container-aware GOMAXPROCS defaults. For most people outside infra this might sound like a minor runtime detail, but it’s a pretty big deal if you’re running Go apps in Kubernetes or any container platform. Before, Go simply set GOMAXPROCS to the number of CPU cores on the machine, which meant that if your container had a CPU limit lower than the machine’s core count, Go would still try to use more threads than it was allowed. The result: the Linux kernel throttled you in 100ms chunks. That’s wasted cycles and ugly tail latency spikes. Now Go looks at the container CPU limit and adjusts automatically. No more mismatched defaults, no more silent throttling ruining your p99. If the orchestrator changes the limit on the fly, Go adapts on the fly too. It’s one of those changes that feels small, but in practice it makes Go apps more predictable and less surprising out of the box. Less time debugging weird latency, more time building the actual product. I’m curious: how often do you explicitly tune GOMAXPROCS in your services, or do you mostly let the runtime handle it?
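A quick way to see what your runtime actually picked inside a container:

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// GOMAXPROCS(0) queries the current value without changing it.
	// On Go 1.25+ inside a container with a CPU limit, expect it to
	// track the cgroup quota rather than the host's core count.
	fmt.Println("NumCPU (visible cores):", runtime.NumCPU())
	fmt.Println("GOMAXPROCS (parallelism):", runtime.GOMAXPROCS(0))
}
```

Try running it with something like `docker run --cpus=2 ...` on a many-core host and compare the two numbers. On Go versions before 1.25, the common workarounds were importing go.uber.org/automaxprocs or setting the GOMAXPROCS environment variable explicitly.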
-
You may have encountered unrealistically large packet sizes while analyzing tcpdump captures! The reason isn’t a network glitch; it’s Generic Receive Offload (GRO), and it’s a huge win for performance. GRO is a software technique that significantly reduces CPU usage by cutting down the number of individual packets the CPU has to process. It merges similar packets into one large packet before they are sent up the network stack, which dramatically reduces the overhead of per-packet processing. The point to note is that the cost of processing a packet is not proportional to its size. The work of inspecting headers, performing checksums, and passing data up the stack is relatively constant. By combining many small packets, GRO amortizes this fixed cost over a much larger amount of data. In high-throughput scenarios this lets the system handle much more data with the same CPU resources, improving overall performance and allowing the CPU to focus on application-level tasks rather than spending its cycles on packet-by-packet overhead. PS: You can check if it’s enabled on your machine by executing sudo ethtool -k <interface_name> | grep generic-receive-offload PPS: tcpdump captures packets at a higher level in the network stack, after the kernel has already received them and performed optimizations like GRO. It does not capture packets directly from the NIC’s ring buffer :) #Networking #Linux #Performance
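Building on the PS above, toggling GRO uses ethtool’s uppercase -K (the interface name here is just an example). Disabling it temporarily can be handy when you need tcpdump to show something closer to on-the-wire segment sizes while debugging:

```
# Check whether GRO is enabled
sudo ethtool -k eth0 | grep generic-receive-offload

# Temporarily disable it for debugging
sudo ethtool -K eth0 gro off

# Re-enable it when done (you want it on in production)
sudo ethtool -K eth0 gro on
```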
-
🔍 Most people working with LLMs run into the same trap: "Why does it keep forgetting?" The answer: context ≠ memory ≠ knowledge. In this article I break down: * Context windows: why overflow matters (and how attackers exploit it) * Memory: from Q CLI rules and Claude CLAUDE.md to Bedrock Agents & AgentCore * Knowledge/RAG: how to scale without blowing your token budget Plus a quick decision guide so teams know when to use rules, memory, or retrieval.
-
GeNetix V3 firmware is here. Packed with new features designed for installations, including: • Lock Scene – prevent accidental overwrites • Scene Priority – console takes control when connected, node scenes take over when it’s not • Triggering flexibility – activate stored scenes via DMX input, wall plates, UDP commands, or MIDI Take a closer look at how these updates expand control and reliability for your installs
-
Peplink 8.5.3 firmware dropped as GA with nearly 17 pages of features and improvements. Important: if you’re using the SIM Injector, make sure you upgrade it to 1.2.5 before upgrading to 8.5.3. We’ve been testing these in our connectivity lab for weeks. A couple of cool features: 1. [Remote User Access] Added DHCP reservation support based on Remote User Access usernames 2. [Switch Controller] Added option in Switch Controller to preserve existing Port and VLAN settings when switches come online 3. [BGP] Added support for BGP over route-based IPsec 4. [Docker] Added Docker support for --device /dev/net/tun to enable tunneling inside containers (example below) 5. [Cellular & 5GN] Added support for site surveys that scan nearby cellular towers using InControl’s Connection Test feature And many more; read the full details
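For item 4, the container side looks something like this (the image name is a placeholder; the point is passing the TUN device through so VPN software inside the container can create tunnels):

```
docker run --device /dev/net/tun --cap-add NET_ADMIN my-vpn-image
```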
-
Reliability Monitor > Event Viewer (for quick triage)

Why? It gives a timeline of crashes/updates with links to details; it’s faster to spot red Xs and warnings there than to dig through Event Viewer.

Reliability Monitor flow:
1. Launch: Win+R → perfmon /rel (or search “Reliability Monitor”).
2. Scan the graph: look for clusters around recent updates/driver installs.
3. Drill in: click a red X → View technical details for the faulting module and exception code.
4. Act: roll back a driver/update, repair the app, or run SFC/DISM if system files look implicated.
5. Cross-check quickly if needed: open Event Viewer → Windows Logs → Application and System to confirm the same error/time and details.
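For step 4, the standard repair pair from an elevated prompt. DISM repairs the component store that SFC restores files from, so if SFC reports corruption it couldn’t fix, run DISM and then SFC again:

```
sfc /scannow
DISM /Online /Cleanup-Image /RestoreHealth
```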
-
If anyone is interested in developing their skills with the Xyce Parallel Electronic Simulator, here’s a quick thought based on my experience that might be helpful. 💬 Some tips for developing this skill: Xyce is an open-source, SPICE-compatible parallel electronic simulator developed by Sandia National Laboratories, and it executes efficiently on a variety of architectures, including single-processor workstations. I have used it for full-chip simulations with output files up to 500G, taking 30 hours or so, and I think it is just brilliant! Do a “Format C:” on your PC, then rebuild it with Ubuntu 22.04 LTS, as you will need Linux to run Xyce and Electric (and maybe Magic, as well as Apache2 as your web server). I picked up a six-year-old 12-core Dell workstation for £345 and am still using it to write this.
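Once it’s installed, a reasonable first smoke test is a tiny SPICE-style netlist; Xyce is netlist-compatible with SPICE, so something like this RC low-pass sketch should run (check exact syntax against the Xyce Users’ Guide):

```
* rc.cir -- RC low-pass smoke test
V1 in 0 SIN(0 1 1k)
R1 in out 1k
C1 out 0 1u
.TRAN 10u 5m
.PRINT TRAN V(in) V(out)
.END
```

Run it with `Xyce rc.cir`, or via `mpirun` for the parallel build once you’re ready for bigger decks.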
-
Falcon boot: a way of booting embedded applications without which a product might not be a “shippable product”. Have you ever wondered why some good Linux products (excluding computers) show neither a boot menu nor any boot logs, and why their boot times are under 1-2 seconds? The reason is optimizing boot for user experience. A slow boot means missing the opportunity to capture a moment, and much more. To boot using Falcon mode, some compile-config changes were needed, and some features had to be removed to make space. After working through U-Boot’s documentation, Bootlin’s slides, and some help from the U-Boot maintainers: voila. U-Boot optimization alone might not buy more than a second of improvement, but in the bigger scheme it can mean saving a lot of energy across sleep-wake cycles; with a fast enough cold boot, there may be no need to keep DRAM powered during sleep, so the CPU can sleep tight. Total boot time (from reset to shell): 2.8 seconds. Let’s hope that fraction vanishes by the new year. :) #Linux #Embedded #EmbeddedLinux #uBoot #LinuxFoundation https://lnkd.in/g_UFZjtH
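For anyone wanting to try Falcon mode, a sketch of the usual ingredients from U-Boot’s falcon documentation; the load addresses and environment variable names below are board-specific placeholders, not values from this project. The idea: enable SPL OS boot in the config, then prepare the kernel arguments once from full U-Boot so that SPL can jump straight to the kernel on later boots:

```
# In the U-Boot defconfig: let SPL boot the OS directly
CONFIG_SPL_OS_BOOT=y
CONFIG_CMD_SPL=y

# One-time preparation at the full U-Boot prompt:
# load kernel + DTB, then export the prepared boot args for SPL
# ("-" skips the initrd; addresses/variables are placeholders)
=> load mmc 0:1 ${loadaddr} zImage
=> load mmc 0:1 ${fdtaddr} board.dtb
=> spl export fdt ${loadaddr} - ${fdtaddr}
# ...then write the exported args blob to wherever your board's
# SPL expects to read it (board-specific; see doc/README.falcon)
```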