💎 Accessibility For Designers Checklist (PDF: https://lnkd.in/e9Z2G2kF), a practical set of cards on WCAG accessibility guidelines — covering accessible color, typography, animations, media, layout and development — to kick off accessibility conversations early on. Kindly put together by Geri Reid.

WCAG for Designers Checklist, by Geri Reid
Article: https://lnkd.in/ef8-Yy9E
PDF: https://lnkd.in/e9Z2G2kF
WCAG 2.2 Guidelines: https://lnkd.in/eYmzrNh7

Accessibility isn’t about compliance. It’s not about ticking off checkboxes. And it’s not about plugging in accessibility overlays or AI engines either. It’s about *designing* with a wide range of people in mind — from the very start, whatever their abilities and preferences.

In my experience, the most impactful way to embed accessibility in your work is to bring a handful of people with different needs into the design process and usability testing early on. Make these test sessions accessible to the entire team, and show the real impact of design and code on real people using a real product.

Teams usually don’t get time to work on features that don’t have a clear business case. But no manager wants to be seen publicly ignoring their prospective customers. Make accessibility visible to everyone on the team, and build an argument around potential reach and potential income.

Don’t ask for big commitments: embed accessibility in your work by default. Account for accessibility needs in your estimates. Create accessibility tickets and flag accessibility issues. Don’t mistake smiling and nodding for support — establish timelines, roles, specifics, objectives.

And most importantly: measure the impact of your work by repeatedly conducting accessibility testing with real people. Build a strong before/after case to show the change the team has enabled and contributed to, and celebrate small and big accessibility wins.
It might not sound like much, but it can start changing the culture faster than you think.

Useful resources:
Giving A Damn About Accessibility, by Sheri Byrne-Haber (disabled): https://lnkd.in/eCeFutuJ
Accessibility For Designers: Where Do I Start?, by Stéphanie Walter: https://lnkd.in/ecG5qASY
Web Accessibility In Plain Language (Free Book), by Charlie Triplett: https://lnkd.in/e2AMAwyt
Building Accessibility Research Practices, by Maya Alvarado: https://lnkd.in/eq_3zSPJ
How To Build A Strong Case For Accessibility:
↳ https://lnkd.in/ehGivAdY, by 🦞 Todd Libby
↳ https://lnkd.in/eC4jehMX, by Yichan Wang

#ux #accessibility
Modular Design Systems
-
If you’re working with Kubernetes, here are 6 scaling strategies you should know — and when to use each one.

Before we start — why should you care about scaling strategies? Because when Kubernetes apps face unpredictable demand, you need scaling mechanisms in place to keep them running smoothly and cost-effectively.

Here are 6 strategies worth knowing:

1. Human Scaling
↳ Manually adjust pod counts using kubectl scale.
↳ Direct but not automated.
When to use ~ For debugging, testing, or small workloads where automation isn’t worth it.

2. Horizontal Pod Autoscaling (HPA)
↳ Changes pod count based on CPU/memory usage.
↳ Adds/removes pods as workload fluctuates.
When to use ~ For stateless apps with variable load (e.g., web apps, APIs).

3. Vertical Pod Autoscaling (VPA)
↳ Adjusts CPU/memory requests for existing pods.
↳ Ensures each pod gets the right resources.
When to use ~ For steady workloads where pod count is fixed, but resource needs vary.

4. Cluster Autoscaling
↳ Adds/removes nodes based on pending pods.
↳ Ensures pods always have capacity to run.
When to use ~ For dynamic environments where pod scheduling fails due to lack of nodes.

5. Custom Metrics Based Scaling
↳ Scale pods using application-specific metrics (e.g., queue length, request latency).
↳ Goes beyond CPU/memory.
When to use ~ For workloads with unique performance signals not tied to infrastructure metrics.

6. Predictive Scaling
↳ Uses ML/forecasting to scale in advance of demand.
↳ Aims to have capacity in place before traffic spikes hit.
When to use ~ For workloads with predictable traffic patterns (e.g., sales events, daily peaks).

Now know this — scaling isn’t one-size-fits-all. The best teams often combine multiple strategies (for example, HPA + Cluster Autoscaling) for resilience and cost efficiency.

What did I miss?

If you found this useful…
🔔 Follow me (Vishakha) for more Cloud & DevOps insights
♻️ Share so others can learn as well
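As a concrete illustration of strategy 2, here is what a minimal HPA manifest might look like, using the autoscaling/v2 API. The Deployment name and thresholds are illustrative, not from the post:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-api-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-api          # hypothetical stateless web API
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add pods above ~70% average CPU
```

With this in place, Kubernetes adds pods when average CPU utilization exceeds the target and scales back down (between 2 and 10 replicas) as load drops. In practice it is often paired with Cluster Autoscaling so newly scheduled pods always find a node.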
-
System design interviews can be a daunting part of the hiring process, but being prepared with the right knowledge makes all the difference. This System Design Cheat Sheet covers essential concepts that every engineer should know when tackling these types of questions.

Key Areas to Focus On:

1. Data Management:
- Cache: Boost read operation speeds with caching mechanisms like Redis or Memcached.
- Blob/Object Storage: Efficiently handle large, unstructured data using systems like S3.
- Data Replication: Ensure data reliability and fault tolerance through replication.
- Checksums: Safeguard data integrity during transmission by detecting errors.

2. Database Selection:
- RDBMS/SQL: Best for structured data with strong consistency (ACID properties).
- NoSQL: Ideal for large volumes of unstructured or semi-structured data (MongoDB, Cassandra).
- Graph DB: For interconnected data like social networks and recommendation engines (Neo4j).

3. Scalability Techniques:
- Database Sharding: Partition large datasets across multiple databases for scalability.
- Horizontal Scaling: Scale out by adding more servers to distribute the load.
- Consistent Hashing: A technique for efficient distribution of data across nodes, essential for load balancing.
- Batch Processing: Use when handling large amounts of data that can be processed in chunks.

4. Networking:
- CDN: Distribute content globally for faster access and lower latency (e.g., Cloudflare, Akamai).
- Load Balancer: Spread traffic across multiple servers to ensure high availability.
- Rate Limiter: Prevent overloading by controlling the rate of incoming requests.
- Redundancy: Design systems to avoid single points of failure by duplicating components.

5. Protocols & Queues:
- Message Queues: Asynchronous communication between microservices, ideal for decoupling services (RabbitMQ, Kafka).
- API Gateway: Control API traffic, manage rate limiting, and provide a single point of entry for your services.
- Gossip Protocol: Efficient communication in distributed systems by periodically exchanging state information.
- Heartbeat Mechanism: Monitor the health of nodes in distributed systems.

6. Modern Architecture:
- Containerization (Docker): Package applications and dependencies into containers for consistency across environments.
- Serverless Architecture: Run functions in the cloud without managing servers, focusing entirely on the code (e.g., AWS Lambda).
- Microservices: Break down monolithic applications into smaller, independently scalable services.
- REST APIs: Build lightweight, maintainable services that interact through stateless API calls.

7. Communication:
- WebSockets: Real-time, bi-directional communication between client and server, commonly used in chat applications, live updates, and collaborative tools.

Save this post and use it as a quick reference for your next system design challenge!
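Consistent hashing (point 3 above) is one of the concepts worth being able to sketch in an interview. Here is a minimal hash ring with virtual nodes — a Python sketch for illustration; production systems like Cassandra or DynamoDB use heavily tuned variants:

```python
import bisect
import hashlib


def _hash(key: str) -> int:
    """Map a string to a point on the ring (md5 used only for spread)."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16)


class ConsistentHashRing:
    def __init__(self, nodes=(), replicas=100):
        self.replicas = replicas   # virtual nodes per physical node
        self._keys = []            # sorted ring positions
        self._ring = []            # node owning each position (parallel list)
        for node in nodes:
            self.add_node(node)

    def add_node(self, node):
        # Place `replicas` virtual points for this node on the ring
        for i in range(self.replicas):
            h = _hash(f"{node}:{i}")
            idx = bisect.bisect(self._keys, h)
            self._keys.insert(idx, h)
            self._ring.insert(idx, node)

    def get_node(self, key):
        """Walk clockwise from the key's hash to the next virtual node."""
        if not self._keys:
            return None
        idx = bisect.bisect(self._keys, _hash(key)) % len(self._keys)
        return self._ring[idx]
```

Each physical node is hashed many times onto the ring, so keys spread evenly, and removing a node only remaps the keys that pointed at its virtual points — the property that makes this essential for load balancing.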
-
A lot of accessibility issues can already be foreseen and prevented in the design phase, saving you time checking and documenting accessibility in mockups later. In this article, I cover color usage, contrast ratios, text resizing, font legibility, target sizes, form elements, focus order, keyboard interactions for complex components, skip links, headings, landmarks, and alternative text for images. The tips are focused on Figma, but can be applied to other tools. https://lnkd.in/eu8YuWyF
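The contrast-ratio checks mentioned above follow a precise WCAG formula: the relative luminance of the lighter color plus 0.05, divided by that of the darker plus 0.05. A small sketch for checking a color pair in code:

```python
def _linear(channel: int) -> float:
    """Convert an sRGB channel (0-255) to linear light, per the WCAG definition."""
    c = channel / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4


def relative_luminance(rgb) -> float:
    r, g, b = (_linear(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b


def contrast_ratio(fg, bg) -> float:
    """WCAG contrast ratio between two colors, from 1:1 up to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)
```

WCAG AA asks for at least 4.5:1 for normal text and 3:1 for large text; black on white comes out at the maximum 21:1.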
-
Agentic AI Design Patterns are emerging as the backbone of real-world, production-grade AI systems, and this is gold from Andrew Ng.

Most current LLM applications are linear: prompt → output. But real-world autonomy demands more. It requires agents that can reflect, adapt, plan, and collaborate, over extended tasks and in dynamic environments.

That’s where the RTPM framework comes in. It’s a design blueprint for building scalable agentic systems:
➡️ Reflection
➡️ Tool-Use
➡️ Planning
➡️ Multi-Agent Collaboration

Let’s unpack each one from a systems engineering perspective:

🔁 1. Reflection
This is the agent’s ability to perform self-evaluation after each action. It’s not just post-hoc logging — it’s part of the control loop. Agents ask:
→ Was the subtask successful?
→ Did the tool/API return the expected structure or value?
→ Is the plan still valid given current memory state?
Techniques include:
→ Internal scoring functions
→ Critic models trained on trajectory outcomes
→ Reasoning chains that validate step outputs
Without reflection, agents remain brittle; with it, they become self-correcting systems.

🛠 2. Tool-Use
LLMs alone can’t interface with the world. Tool-use enables agents to execute code, perform retrieval, query databases, call APIs, and trigger external workflows.
Tool-use design involves:
→ Function calling or JSON schema execution (OpenAI, Fireworks AI, LangChain, etc.)
→ Grounding outputs into structured results (e.g., SQL, Python, REST)
→ Chaining results into subsequent reasoning steps
This is how you move from "text generators" to capability-driven agents.

📊 3. Planning
Planning is the core of long-horizon task execution. Agents must:
→ Decompose high-level goals into atomic steps
→ Sequence tasks based on constraints and dependencies
→ Update plans reactively when intermediate states deviate
Design patterns here include:
→ Chain-of-thought with memory rehydration
→ Execution DAGs or LangGraph flows
→ Priority queues and re-entrant agents
Planning separates short-term LLM chains from persistent agentic workflows.

🤖 4. Multi-Agent Collaboration
As task complexity grows, specialization becomes essential. Multi-agent systems allow modularity, separation of concerns, and distributed execution.
This involves:
→ Specialized agents: planner, retriever, executor, validator
→ Communication protocols: Model Context Protocol (MCP), A2A messaging
→ Shared context: via centralized memory, vector DBs, or message buses
This mirrors multi-threaded systems in software — except now the "threads" are intelligent and autonomous.

Agentic Design ≠ monolithic LLM chains. It’s about constructing layered systems with runtime feedback, external execution, memory-aware planning, and collaborative autonomy.

Here is a deep-dive blog if you would like to learn more: https://lnkd.in/dKhi_n7M
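To make the Reflection pattern concrete, here is a toy control loop in Python. The `generate` and `critique` callables are hypothetical stand-ins for LLM calls (an actor and a critic); the shape of the loop is the point, not the stubs:

```python
def run_with_reflection(task, generate, critique, max_attempts=3):
    """Actor-critic loop: draft, self-evaluate, revise until accepted.

    generate(task, feedback) -> draft      (stand-in for an LLM call)
    critique(task, draft) -> (accepted: bool, feedback: str)
    """
    draft = generate(task, feedback=None)
    for _ in range(max_attempts - 1):
        accepted, feedback = critique(task, draft)
        if accepted:
            break
        # Reflection: feed the critic's feedback into the next generation step
        draft = generate(task, feedback=feedback)
    return draft
```

In a real system, `critique` might be an internal scoring function, a separate critic model, or simply a schema validator on a tool call's output — any signal that closes the control loop.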
-
Navisworks can now create Autodesk Data Exchanges! That’s a big step forward for interoperability.

Data Exchanges use a neutral format that works across any application with a connector, including:
Autodesk tools - Revit, Inventor, Civil 3D, Navisworks, AutoCAD and Dynamo
Other design tools - Rhino, Grasshopper, SolidWorks and Tekla
Business applications - Power Automate and Power BI

Once created, an exchange can be shared between these applications without needing to convert files.

What’s New: You can now create Data Exchanges directly from Navisworks Manage.
💡 That means any format supported by Navisworks can now feed into other connected tools.

Here I took advantage of another recently released feature: the scan-to-mesh workflow using ReCap Pro.
ReCap → Navisworks → Power BI
This allows point clouds to be converted to segmented meshes, and exported as native Navisworks and Revit files. In Power BI I can now dashboard all design components across a project, not just Revit files. Think point clouds, SketchUp, MicroStation and more.

Check out what applications are on the Data Exchange roadmap and submit your ideas here: https://lnkd.in/g3TykV9f
For more, check out my previous post on running clash detection on scans in ACC: https://lnkd.in/g85jsTUY
See the Data Exchange help page to join the Beta and get set up: https://lnkd.in/gestwGX4

#Autodesk #RealityCapture #Revit #PowerBI
-
Ever wished you could just reuse the good parts of your app instead of rebuilding them from scratch every time? 🤔 Yeah, me too. 🧑💻

That’s exactly what led me to explore Bit and build a completely composable Todo app, where every piece, from the UI to the #GraphQL server, is an independent, versioned, and reusable component. 💜
👉 I just shared the full breakdown here: https://lnkd.in/eW6MAVeC

Why does composable architecture matter so much today? Because it lets you:
🔅 Ship faster without being stuck in huge, messy codebases.
🔅 Reuse your own components across ANY project.
🔅 Update a feature once, and have it reflect everywhere it’s used.
🔅 Collaborate with your team without stepping on each other’s toes.
💟 Build real micro-frontends (the easy way) or scale modular monoliths neatly.

🤯 Bit makes it ridiculously easy to create, version, share, and evolve components independently. You get full dev environments for each component (hello isolated testing 👋), visual dependency graphs, and painless exports to any app. 🔥

In my blog, I show a working Todo app where:
✅ Hooks, UI components, and the backend server are all separate components.
✅ Every component can be installed in any other app or improved independently.
✅ Changes to one piece trigger auto-detection of what else needs updating.

If you’re curious about how to stop copy-pasting code forever and start working smarter, check it out 👉 https://lnkd.in/eW6MAVeC

#SoftwareDevelopment #ComposableArchitecture #WebDev #Bit #Frontend #MicroFrontends #DeveloperExperience
-
IT #Integrations and #Enterprise #Architecture Dilemma?

As per a leading research company, over 70% of #CIOs claim that complexity in IT integration is their top challenge, particularly when combining legacy systems with new cloud platforms. 88% of organizations experience integration issues related to #data silos, leading to inefficiencies and delays in decision-making.

When navigating IT complexity, balancing architecture and integration is critical. Here’s a simple way forward:

1. Use #Modular Architecture to break down large systems into smaller, manageable components. Use #microservices or #API-driven architecture, where each module handles a specific function, making changes easier without disrupting the entire system.
2. Interoperability First ensures different systems communicate seamlessly. Adopt standardized protocols (like #REST, #SOAP, or #GraphQL) for easier integration and scalability.
3. Hybrid on-premises and #cloud solutions provide flexibility. Use cloud for agility and innovation, while retaining mission-critical systems on-premises, integrated via #middleware.
4. Integration via #APIs simplifies communication between disparate systems. Leverage an #API Gateway to connect various systems, enabling agility and faster response to change.
5. A Data-Centric Focus tackles the most complex part of integration: the data itself. Implement a central data lake or #warehouse with well-defined data governance policies, allowing smooth access to accurate, real-time data.
6. Continuous Alignment with Business Goals avoids the IT silos that typically emerge. Regularly evaluate your architecture and integration strategy to ensure it aligns with evolving business needs. Instill an Architecture Review Board (ARB) process to assess complexity and regulate changes, harmonizing the enterprise ecosystem.

Quick reference to ARB: https://lnkd.in/dDC-6eUn

#digital #architecture #ITStrategy #ITComplexity #Tech #ITInfrastructure #Techleadership
-
PCB testing and firmware development for embedded devices play a crucial role in ensuring the reliability and functionality of electronic products. In this technical post, I will explore a systematic approach to PCB testing and firmware development, covering steps from initial board bring-up to thorough peripheral testing, all while considering the importance of OTA firmware updates for efficient maintenance in the field.

1. PCB Testing: From BBT to ABT
PCB testing begins with Basic Bareboard Testing (BBT) after manufacturing, ensuring that all the basic electrical connections are present and the board is free from manufacturing defects. Subsequently, Assembled Board Testing (ABT) is conducted after PCB assembly to validate the connectivity of components. During this phase, it is essential to ensure that no capacitors are shorted, as shorts can lead to catastrophic failures.

2. Board Bring-up: Step by Step
Once the board’s basic integrity is confirmed, the board bring-up process begins. The approach is to populate the board section by section with 0 Ohm resistors, allowing the voltage across test points to be measured for verification.

3. Peripheral Testing: Ensuring Individual Functionality
After a successful board bring-up, it is essential to test each peripheral individually to ensure proper functionality. This process involves reading data from the controller in each particular section and sending it over the serial port for validation.

4. Code Integration and Parallel Testing
This comprehensive testing approach examines how various peripherals interact with each other and ensures that there are no conflicts or unexpected behaviors when running the system as a whole.

5. Rigorous Firmware Testing
A robust firmware testing process is essential to detect and eliminate bugs in embedded devices. Developers should conduct exhaustive unit testing, integration testing, and system testing. Employing automated testing frameworks and static code analysis tools can significantly improve the efficiency and effectiveness of this testing phase.

6. Over-the-Air (OTA) Firmware Updates
Incorporating OTA firmware updates in embedded devices is a proactive approach to address field issues and update code remotely. While the implementation of OTA updates may have a higher upfront cost, it proves to be cost-effective in the long run by minimizing the need for physical service visits and reducing downtime for customers.

Conclusion: Effective PCB testing and firmware development are vital for the success of embedded devices. Starting with BBT and ABT, followed by systematic board bring-up, thorough peripheral testing, and rigorous firmware testing, engineers can ensure the reliability and functionality of their products. Integrating OTA firmware updates further enhances the overall efficiency and long-term maintenance of the devices, benefiting both manufacturers and end-users alike.
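One practical detail of OTA updates is integrity checking: the device should refuse to flash an image that arrived corrupted or truncated. A minimal sketch of the idea (Python for readability; on-device code would typically be C with a hardware or software SHA-256, and production systems add a cryptographic signature on top of the plain hash):

```python
import hashlib

CHUNK = 4096  # hash the image in small chunks, as a RAM-constrained device would


def sha256_of_image(chunks) -> str:
    """Compute SHA-256 over a firmware image delivered as byte chunks."""
    h = hashlib.sha256()
    for chunk in chunks:
        h.update(chunk)
    return h.hexdigest()


def verify_firmware(image: bytes, expected_sha256: str) -> bool:
    """Accept the OTA image only if its digest matches the one published
    alongside the update; otherwise abort before touching flash."""
    chunks = (image[i:i + CHUNK] for i in range(0, len(image), CHUNK))
    return sha256_of_image(chunks) == expected_sha256
```

A single flipped bit or a truncated download changes the digest, so the device falls back to its current firmware instead of bricking itself.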
-
Understanding the Flow of a Received CAN Message through the AUTOSAR BSW Stack

In the AUTOSAR Basic Software (BSW) stack, the handling of received CAN messages involves a systematic flow across several layers. Here’s a breakdown:

1. CAN Driver: Upon receiving a CAN message, the CAN Driver fetches the L-PDU from message objects. This action is triggered by the CAN Controller through either an interrupt or polling. The driver then invokes the CanIf_RxIndication() function to notify the CAN Interface (CanIf) about the newly received data. Key parameters such as HRH (Handle of the Received Hardware object), Message ID, and a pointer to the SDU (Service Data Unit) are passed.

2. CAN Interface (CanIf): Using the HRH and Message ID, CanIf determines the corresponding PDU ID (Protocol Data Unit Identifier). It then passes this PDU ID along with the SDU to the PDU Router (PduR).
Note: Direct communication between CanIf and upper-layer modules (e.g., COM or DCM) is avoided because upper layers are abstracted from protocol-specific details.

3. PDU Router (PduR): The PduR routes the PDU to the appropriate destination module. For example, if it’s a communication-related message, it forwards the information to the COM module.

4. COM Module: Upon receiving the PDU, the COM module decodes it into individual signals and stores these signals in its buffer. The COM module also provides APIs for reading and writing these signals.

5. Application Layer: Software Components (SWCs) in the application layer access the decoded signals via the Runtime Environment (RTE). The COM module’s APIs facilitate signal read/write operations, enabling seamless integration with the application logic.

This structured flow ensures modularity and protocol abstraction, enhancing system reliability and maintainability in automotive software.
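The receive path above can be modeled in a few lines. This is an illustrative Python model of the routing tables and indication chain, not the real AUTOSAR C APIs; the table contents and the toy signal layout are assumptions made for the example:

```python
# (HRH, CAN ID) -> PDU ID: the lookup CanIf performs on every Rx indication
CANIF_RX_LOOKUP = {(0, 0x1A0): "PDU_ENGINE_DATA"}

# PDU ID -> destination BSW module: PduR's static routing table
PDUR_ROUTING = {"PDU_ENGINE_DATA": "COM"}


def can_if_rx_indication(hrh, can_id, sdu):
    """Layer 2: CanIf resolves the PDU ID and hands the SDU to PduR."""
    pdu_id = CANIF_RX_LOOKUP[(hrh, can_id)]
    return pdu_r_route(pdu_id, sdu)


def pdu_r_route(pdu_id, sdu):
    """Layer 3: PduR forwards the PDU to its configured destination."""
    if PDUR_ROUTING[pdu_id] == "COM":
        return com_rx_indication(pdu_id, sdu)
    return None  # other destinations (e.g., DCM) omitted in this sketch


def com_rx_indication(pdu_id, sdu):
    """Layer 4: COM unpacks SDU bytes into named signals for the RTE.
    Toy layout: first two bytes carry a big-endian engine RPM signal."""
    return {"EngineRpm": int.from_bytes(sdu[:2], "big")}
```

Each hop resolves exactly one mapping: CanIf turns (HRH, CAN ID) into a PDU ID, PduR turns the PDU ID into a destination module, and COM turns raw bytes into named signals — which is why upper layers never need to know CAN-specific details.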