Why do we need an MMU in the kernel? 1. Memory protection between processes. The kernel assigns each process its own virtual address space; at run time the MMU translates those virtual addresses to physical addresses in RAM. Because one process's address space never overlaps another's code or data segments, a buggy or malicious process cannot read or corrupt another process's memory.
Understanding the role of MMU in kernel memory protection
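A minimal sketch of that isolation in C, assuming a Linux/POSIX system: after fork(), parent and child dereference the same virtual address, but the MMU backs it with separate physical pages (copy-on-write for a MAP_PRIVATE mapping), so the child's write never reaches the parent.

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* one private, anonymous page; private means copy-on-write after fork */
    int *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }
    *p = 100;

    pid_t pid = fork();
    if (pid == 0) {                /* child */
        *p = 999;                  /* MMU gives the child its own copy of the page */
        printf("child : addr=%p value=%d\n", (void *)p, *p);
        _exit(0);
    }
    wait(NULL);
    /* same virtual address, but the parent's physical page is untouched */
    printf("parent: addr=%p value=%d\n", (void *)p, *p);
    return 0;
}
```

Both processes print the same virtual address, yet the parent still sees 100: the translation tables differ per process, which is exactly the protection described above.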
🚀 After eight months of hard work, we removed the Rust engine binaries from Prisma ORM! Here's why:
😍 Reduced bundle size by ~90%
⚡️ Faster queries (on avg ~3.48x faster in our benchmark)
🐾 Lower CPU footprint
💡 Less deployment complexity
🤝 Easier open-source contributions
💡 The Rust-free Prisma ORM is ready for production as of v6.16.0. You can enable it by:
✅ setting the `engineType` option on the `generator` block
✅ installing the driver adapter for your database
👉 Learn more in the docs: https://lnkd.in/dM95zuGp
For those of you who use hexagonal/onion architecture in your applications:
1. What's your primary reason/goal for using it?
2. How do you assign code to the abstraction elements (ports, adapters, rings)?
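One common way to answer question 2, sketched here in C with hypothetical names (Notifier, place_order, console_notify): a port is an interface the domain core owns and depends on; an adapter is the concrete implementation wired in from the outside.

```c
#include <stdio.h>

/* port: what the domain core needs, expressed in its own vocabulary */
typedef struct {
    void (*notify)(const char *user, const char *message);
} Notifier;

/* domain core (inner ring): depends only on the port, never on a technology */
void place_order(const Notifier *n, const char *user)
{
    /* ... business rules would live here ... */
    n->notify(user, "order placed");
}

/* adapter (outer ring): binds the port to a concrete mechanism, here stdout */
static void console_notify(const char *user, const char *message)
{
    printf("[console] to=%s msg=%s\n", user, message);
}

int main(void)
{
    Notifier console_notifier = { console_notify };
    place_order(&console_notifier, "alice");  /* composition root does the wiring */
    return 0;
}
```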
This talk covers the hardware architecture for detecting and correcting memory errors, the software support for handling these and other kinds of hardware errors, and stories of memory errors in the real world. https://lnkd.in/eSftd2Mu
Flipping Bits: Memory Errors in the Machine, by Taylor Campbell
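To make "correcting memory errors" concrete, here is a toy Hamming(7,4) encoder/decoder in C. It is the same single-error-correction idea that ECC DRAM applies in hardware over wider words; an illustrative sketch, not code from the talk.

```c
#include <stdint.h>
#include <stdio.h>

/* encode 4 data bits into a 7-bit codeword with 3 parity bits;
 * codeword bit layout (positions 1..7): p1 p2 d0 p4 d1 d2 d3 */
static uint8_t hamming74_encode(uint8_t d)
{
    uint8_t d0 = d & 1, d1 = (d >> 1) & 1, d2 = (d >> 2) & 1, d3 = (d >> 3) & 1;
    uint8_t p1 = d0 ^ d1 ^ d3;   /* covers positions 1,3,5,7 */
    uint8_t p2 = d0 ^ d2 ^ d3;   /* covers positions 2,3,6,7 */
    uint8_t p4 = d1 ^ d2 ^ d3;   /* covers positions 4,5,6,7 */
    return p1 | (p2 << 1) | (d0 << 2) | (p4 << 3) | (d1 << 4) | (d2 << 5) | (d3 << 6);
}

/* recompute the parities; a nonzero syndrome is the 1-based position of
 * the flipped bit, so a single-bit error can be corrected in place */
static uint8_t hamming74_decode(uint8_t c)
{
    uint8_t s1 = (c ^ (c >> 2) ^ (c >> 4) ^ (c >> 6)) & 1;
    uint8_t s2 = ((c >> 1) ^ (c >> 2) ^ (c >> 5) ^ (c >> 6)) & 1;
    uint8_t s4 = ((c >> 3) ^ (c >> 4) ^ (c >> 5) ^ (c >> 6)) & 1;
    uint8_t syndrome = s1 | (s2 << 1) | (s4 << 2);
    if (syndrome)
        c ^= (uint8_t)(1u << (syndrome - 1));
    return ((c >> 2) & 1) | (((c >> 4) & 1) << 1)
         | (((c >> 5) & 1) << 2) | (((c >> 6) & 1) << 3);
}

int main(void)
{
    uint8_t data = 0xB;                      /* 1011 */
    uint8_t code = hamming74_encode(data);
    code ^= 1u << 4;                         /* flip one bit "in the machine" */
    printf("recovered 0x%X from the corrupted codeword\n",
           hamming74_decode(code));          /* prints 0xB */
    return 0;
}
```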
In my early days I was often confused between mmap and malloc. Both give you memory, but they work very differently.

malloc()
- A user-space library function (from libc).
- Allocates memory from the heap.
- Typically uses the brk/sbrk system calls to grow or shrink the heap.
- Small and medium allocations are usually handled by malloc.
- Faster for small allocations, since it manages memory internally without always calling the kernel.
Example: int *arr = malloc(100 * sizeof(int)); allocates ~400 bytes on the heap.

mmap()
- A system call that maps pages into a process's address space.
- Can allocate anonymous memory (like malloc) or map files and devices.
- Used for large allocations (bypasses the heap).
- Memory is page-aligned and managed by the kernel.
- Can share memory between processes (if MAP_SHARED is used).
Example: int *arr = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0); allocates one page (4 KB).
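A self-contained comparison, assuming Linux/POSIX. Note that the cleanup paths differ too: free() hands memory back to the allocator (which may keep it for reuse), while munmap() returns the page directly to the kernel.

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

int main(void)
{
    /* heap allocation: small, fast, managed by the libc allocator */
    int *heap_arr = malloc(100 * sizeof(int));
    if (!heap_arr) { perror("malloc"); return 1; }
    heap_arr[0] = 42;

    /* page allocation: the kernel maps one anonymous, zero-filled 4 KB page */
    int *page_arr = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page_arr == MAP_FAILED) { perror("mmap"); free(heap_arr); return 1; }
    page_arr[0] = 42;

    printf("heap: %d, mmap: %d, untouched mmap byte: %d\n",
           heap_arr[0], page_arr[0], page_arr[1]);  /* anonymous pages start zeroed */

    free(heap_arr);              /* back to the allocator, not necessarily the kernel */
    munmap(page_arr, 4096);      /* page handed straight back to the kernel */
    return 0;
}
```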
New release Ultralytics v8.3.202 | TFLite INT8 per-channel quantization fix 🚀
More accurate YOLO11 TFLite INT8 models with per-channel quantization and leaner artifacts; plus faster distributed tuning and steadier CI for smoother edge deployments.
Minor updates:
✅ Per-channel INT8 fix and disabled `batchmatmul_unfold` reduce TFLite artifact size for edge devices
✅ Distributed Tuner seeds immediately when the MongoDB collection exists, improving multi-worker starts
✅ CI stability: macOS runner pinned to macOS-26 for consistent builds
Ultralytics v8.3.202 release notes ➡️ Release v8.3.202 https://lnkd.in/dmgGD7xs
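For context on why per-channel matters: with one INT8 scale per output channel, a channel with large weights cannot crush the precision of a channel with small weights, which is what happens under a single per-tensor scale. A generic sketch of the idea in C (not Ultralytics' implementation; the weights are made up):

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>

#define CH 2
#define W  4

int main(void)
{
    float w[CH][W] = { {0.02f, -0.01f, 0.03f, 0.005f},   /* narrow-range channel */
                       {5.0f,  -4.0f,  2.5f, -1.0f} };   /* wide-range channel  */
    int8_t q[CH][W];

    for (int c = 0; c < CH; c++) {
        float amax = 0.0f;                    /* per-channel absolute maximum */
        for (int i = 0; i < W; i++)
            if (fabsf(w[c][i]) > amax) amax = fabsf(w[c][i]);
        float scale = amax / 127.0f;          /* one scale per channel */
        for (int i = 0; i < W; i++)
            q[c][i] = (int8_t)lrintf(w[c][i] / scale);
        printf("ch%d: scale=%g, w[0]=%g -> q=%d -> dequant=%g\n",
               c, scale, w[c][0], q[c][0], q[c][0] * scale);
    }
    return 0;
}
```

With a single shared scale of 5.0/127, every weight in the first channel would quantize to just 0 or ±1; per-channel scales preserve its resolution.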
🚀 Day 144 of #gfg160 ✅
🔹 Problem Solved: Directed Graph Cycle
Classic back edge detection in directed graphs! DFS with recursion stack tracking to identify cycles — crucial for dependency analysis and deadlock detection!
👨‍💻 Approach:
🔸 DFS with Recursion Stack: Track nodes in current path to detect back edges
🔸 Visited Array: Mark explored nodes to avoid redundant traversals
🔸 Back Edge Detection: If we reach a node already in recursion stack, cycle found
🔸 Multi-Component Check: Test all disconnected components separately
🔸 Stack Management: Add/remove nodes from recursion stack during DFS
🔸 Early Termination: Return immediately when first cycle is detected
🔸 Key Insight: Back edges in directed graphs always indicate cycles
⏱ Time Complexity: O(V + E)
💾 Space Complexity: O(V)
#GeeksforGeeks #gfg160 #geekstreak2025
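A compact C version of this approach, using a hypothetical adjacency list rather than the GfG driver code:

```c
#include <stdbool.h>
#include <stdio.h>

#define MAXV 100

int adj[MAXV][MAXV], deg[MAXV], V;   /* simple adjacency lists */
bool visited[MAXV], in_stack[MAXV];

/* DFS; returns true if a back edge (cycle) is reachable from u */
bool dfs(int u)
{
    visited[u] = in_stack[u] = true;     /* push u onto the recursion stack */
    for (int i = 0; i < deg[u]; i++) {
        int v = adj[u][i];
        if (in_stack[v]) return true;              /* back edge: cycle found */
        if (!visited[v] && dfs(v)) return true;    /* early termination */
    }
    in_stack[u] = false;                 /* pop u from the recursion stack */
    return false;
}

bool has_cycle(void)
{
    for (int u = 0; u < V; u++)          /* check all disconnected components */
        if (!visited[u] && dfs(u)) return true;
    return false;
}

int main(void)
{
    V = 3;
    adj[0][deg[0]++] = 1;   /* 0 -> 1 */
    adj[1][deg[1]++] = 2;   /* 1 -> 2 */
    adj[2][deg[2]++] = 0;   /* 2 -> 0 closes the cycle */
    printf("cycle: %s\n", has_cycle() ? "yes" : "no");   /* yes */
    return 0;
}
```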
Through digital signal processing in C, I have now created an algorithm that computes the Inverse Discrete Fourier Transform. While the Discrete Fourier Transform converts time-domain signals into frequency-domain signals, the Inverse Discrete Fourier Transform does the opposite: it converts frequency signals back into time signals, so the ReX and ImX outputs are turned back into the original input signal. You can see the plotted results in the image below (made with gnuplot). Notice how output_idft.dat and input_signal.dat are identical, demonstrating the accuracy of the algorithm. GitHub repo here: https://lnkd.in/e7tMJrMK
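The same round-trip property in a minimal O(N²) C sketch (illustrative, not the repo's code): the forward DFT produces ReX/ImX, and the inverse sums the weighted sinusoids back into the time signal, scaled by 1/N.

```c
#include <math.h>
#include <stdio.h>

#define N  8
#define PI 3.14159265358979323846

/* forward DFT of a real signal: x[] -> ReX[], ImX[] */
void dft(const double x[N], double ReX[N], double ImX[N])
{
    for (int k = 0; k < N; k++) {
        ReX[k] = ImX[k] = 0.0;
        for (int n = 0; n < N; n++) {
            ReX[k] += x[n] * cos(2 * PI * k * n / N);
            ImX[k] -= x[n] * sin(2 * PI * k * n / N);
        }
    }
}

/* inverse DFT: each output sample is the sum of all frequency
 * components evaluated at that time index, divided by N */
void idft(const double ReX[N], const double ImX[N], double x[N])
{
    for (int n = 0; n < N; n++) {
        x[n] = 0.0;
        for (int k = 0; k < N; k++)
            x[n] += ReX[k] * cos(2 * PI * k * n / N)
                  - ImX[k] * sin(2 * PI * k * n / N);
        x[n] /= N;
    }
}

int main(void)
{
    double x[N] = {1, 2, 3, 4, 4, 3, 2, 1}, ReX[N], ImX[N], y[N];
    dft(x, ReX, ImX);
    idft(ReX, ImX, y);
    for (int n = 0; n < N; n++)     /* y matches x to within rounding error */
        printf("x[%d]=%g  reconstructed=%g\n", n, x[n], y[n]);
    return 0;
}
```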
Stitching together cache lines with magic bits 🦄
From the ordering of 32 uint16_ts we get ~117 "magic bits" (log2(32!) ≈ 117.7). Last time, I used them to squeeze in more payload. But those bits don't have to be payload. They can also encode structure. Specifically:
- prev pointer = 48b (typically enough, in practice)
- next pointer = 48b
- tags = 21b
Total = 117b
Which means you can build a doubly linked list inside a cache line. Not just storing more elements — but stitching cache lines together. That's the unicorn powering these "magic bits."
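Where those bits come from: a fixed set of n distinct values can be stored in n! different orders, so the ordering itself carries log2(n!) bits, recoverable with a Lehmer code. A sketch in C of the encode/decode round trip, using n = 8 (≈15.3 bits) so the index fits comfortably in a uint64_t; the cache-line version applies the same idea to 32 elements with wider arithmetic.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define N 8

/* permutation -> integer in [0, N!) via the factorial number system */
uint64_t perm_to_index(const int p[N])
{
    uint64_t idx = 0;
    for (int i = 0; i < N; i++) {
        int smaller = 0;                  /* later elements smaller than p[i] */
        for (int j = i + 1; j < N; j++)
            if (p[j] < p[i]) smaller++;
        idx = idx * (N - i) + smaller;    /* next factoradic digit */
    }
    return idx;
}

/* integer -> permutation: peel digits, then pick from the unused pool */
void index_to_perm(uint64_t idx, int p[N])
{
    int digits[N], pool[N];
    for (int i = N - 1; i >= 0; i--) {
        digits[i] = (int)(idx % (N - i));
        idx /= (N - i);
    }
    for (int i = 0; i < N; i++) pool[i] = i;
    for (int i = 0; i < N; i++) {
        p[i] = pool[digits[i]];           /* digits[i]-th smallest unused value */
        for (int j = digits[i]; j < N - 1 - i; j++)
            pool[j] = pool[j + 1];        /* remove it from the pool */
    }
}

int main(void)
{
    int p[N] = {3, 1, 4, 0, 5, 7, 2, 6}, q[N];
    uint64_t idx = perm_to_index(p);      /* up to ~15 bits hidden in the order */
    index_to_perm(idx, q);
    printf("index=%llu round-trip ok=%d\n",
           (unsigned long long)idx, memcmp(p, q, sizeof p) == 0);
    return 0;
}
```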
Advanced system design with bit encoding and binary search optimization!
Problem: Implement Router
My Solution: A multi-structure system! Hash maps for packet storage, a queue for FIFO forwarding, sorted arrays for range queries. Bit encoding creates unique keys from (source, dest, timestamp).
The sophistication:
- Bit encoding: pack 3 values into a 64-bit key
- Binary search: O(log n) range counting with bounds
- Coordinated updates: maintain consistency across structures
- Memory management: capacity-aware with FIFO eviction
Key insight: Hard system design problems require orchestrating multiple data structures, each optimized for specific operations. No single structure can efficiently handle all requirements!
System design mastery: coordinate multiple structures for complex requirements!
#LeetCode #SystemDesign #HardProblems #BitEncoding #BinarySearch #CodingInterview #100DaysOfCode
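The two core tricks, sketched in C with assumed field widths (16 + 16 + 32 = 64 bits; the real problem's value ranges may call for a different split):

```c
#include <stdint.h>
#include <stdio.h>

/* pack (source, dest, timestamp) into one 64-bit key; a hash set of these
 * keys detects duplicate packets in O(1) */
static uint64_t pack(uint32_t source, uint32_t dest, uint32_t timestamp)
{
    return ((uint64_t)(source & 0xFFFF) << 48)
         | ((uint64_t)(dest   & 0xFFFF) << 32)
         |  (uint64_t)timestamp;
}

/* first index with a[i] >= t in a sorted array (lower bound);
 * count in [lo, hi] = lower_bound(hi + 1) - lower_bound(lo) */
static int lower_bound_u32(const uint32_t *a, int n, uint32_t t)
{
    int l = 0, r = n;
    while (l < r) {
        int m = l + (r - l) / 2;
        if (a[m] < t) l = m + 1; else r = m;
    }
    return l;
}

int main(void)
{
    uint64_t key = pack(7, 42, 100000);
    printf("key=%llx source=%u dest=%u ts=%u\n", (unsigned long long)key,
           (unsigned)(key >> 48) & 0xFFFF, (unsigned)(key >> 32) & 0xFFFF,
           (unsigned)(key & 0xFFFFFFFFu));

    uint32_t ts[] = {90, 95, 100, 100, 105, 110};   /* per-destination, sorted */
    int cnt = lower_bound_u32(ts, 6, 105 + 1) - lower_bound_u32(ts, 6, 95);
    printf("packets in [95,105] = %d\n", cnt);      /* 4 */
    return 0;
}
```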
Algorithms I’d master if I had to design systems that scale:
1. Consistent Hashing (sketched below)
2. Load Balancing Algorithms
3. Leaky Bucket & Token Bucket
4. Bloom Filters
5. Merkle Trees
6. Quorum Algorithms
7. Leader Election Algorithms
8. Distributed Lock Algorithms
9. Raft / Paxos
10. Gossip Protocol
11. Vector Clocks / Lamport Timestamps
12. Two-Phase / Three-Phase Commit
13. Reservoir Sampling
14. HyperLogLog
15. CRDTs
16. Sharding Algorithms
17. MapReduce
18. Tail Latency Reduction (Hedged Requests)
19. Circuit Breaker Pattern
20. Split-Brain Resolution Algorithms
21. Load Shedding Algorithms
22. Byzantine Fault Tolerance
23. Skip Lists
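A minimal sketch of #1, consistent hashing, in C: nodes and keys hash onto the same ring, and a key belongs to the first node hash clockwise from it, so adding or removing a node only remaps the keys on one arc. FNV-1a and the tiny fixed node table are illustrative choices; production rings add virtual nodes for balance.

```c
#include <stdint.h>
#include <stdio.h>

/* FNV-1a: a simple, well-known 64-bit string hash */
static uint64_t fnv1a(const char *s)
{
    uint64_t h = 1469598103934665603ULL;
    while (*s) { h ^= (unsigned char)*s++; h *= 1099511628211ULL; }
    return h;
}

#define NNODES 3
static const char *nodes[NNODES] = {"node-a", "node-b", "node-c"};

/* first node hash at or after the key's hash; wrap to the smallest if none */
static const char *owner(const char *key)
{
    uint64_t kh = fnv1a(key);
    const char *best = NULL, *first = NULL;
    uint64_t best_h = UINT64_MAX, first_h = UINT64_MAX;
    for (int i = 0; i < NNODES; i++) {
        uint64_t nh = fnv1a(nodes[i]);
        if (nh < first_h) { first_h = nh; first = nodes[i]; }
        if (nh >= kh && nh < best_h) { best_h = nh; best = nodes[i]; }
    }
    return best ? best : first;
}

int main(void)
{
    const char *keys[] = {"user:1", "user:2", "session:9"};
    for (int i = 0; i < 3; i++)
        printf("%s -> %s\n", keys[i], owner(keys[i]));
    return 0;
}
```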