
Daily Tech Digest - April 20, 2025


Quote for the day:

"Limitations live only in our minds. But if we use our imaginations, our possibilities become limitless." --Jamie Paolinetti



The Digital Twin in Automotive: The Update

According to Digital Twin researcher Julian Gebhard, the industry is moving toward integrated federated systems that allow seamless data exchange and synchronization across tools and platforms. These systems rely on semantic models and knowledge graphs to ensure interoperability and data integrity throughout the product development process. By structuring data as semantic triples (e.g., (Car) → (is colored) → (blue)), data becomes traversable, transforming raw data into knowledge. It also becomes machine-readable, enabling collaboration across departments and making development more efficient and consistent. The next step is to use knowledge graphs to model product data at the value level, instead of only connecting metadata. They enable dynamic feedback loops across systems, so that changes in one area, such as simulation results or geometry updates, can automatically influence related systems. This helps maintain consistency and accelerates iteration during development. Moreover, when functional data is represented at the value level, it becomes possible to integrate disparate systems such as simulation and CAD tools into a unified, holistic viewer. In this integrated model, any change in geometry in one system automatically triggers updates in simulation parameters and physical properties, ensuring that the digital twin evolves in tandem with the actual product.
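The triple structure described above can be sketched in a few lines of code. This is a toy in-memory store for illustration only (the `TripleStore` class and its method names are invented for this example, not taken from any specific tool):

```python
from collections import defaultdict

# A toy triple store: (subject, predicate, object) facts, as in the
# (Car) -> (is colored) -> (blue) example above.
class TripleStore:
    def __init__(self):
        self.triples = []
        self.by_subject = defaultdict(list)

    def add(self, subj, pred, obj):
        self.triples.append((subj, pred, obj))
        self.by_subject[subj].append((pred, obj))

    def traverse(self, subj):
        """Return all (predicate, object) pairs attached to a subject."""
        return list(self.by_subject[subj])

store = TripleStore()
store.add("Car", "is colored", "blue")
store.add("Car", "has part", "Battery")
store.add("Battery", "has capacity", "75 kWh")

# Traversal turns isolated rows into connected, machine-readable knowledge:
print(store.traverse("Car"))  # -> [('is colored', 'blue'), ('has part', 'Battery')]
```

Because the store is traversable subject by subject, a change recorded against one node (say, a battery's capacity) is reachable from every system that links to it, which is the basis for the cross-tool feedback loops described above.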


Wait, what is agentic AI?

AI agents are generally better than generative AI models at organizing, surfacing, and evaluating data. In theory, this makes them less prone to hallucinations. From the HBR article: “The greater cognitive reasoning of agentic AI systems means that they are less likely to suffer from the so-called hallucinations (or invented information) common to generative AI systems. Agentic AI systems also have [a] significantly greater ability to sift and differentiate information sources for quality and reliability, increasing the degree of trust in their decisions.” ... Agentic AI is a paradigm shift on the order of the emergence of LLMs or the shift to SaaS. That is to say, it’s a real thing, but we’re not yet close to understanding exactly how it will change the way we live and work. The adoption curve for agentic AI will have its challenges. There are questions wherever you look: How do you put AI agents into production? How do you test and validate code generated by autonomous agents? How do you deal with security and compliance? What are the ethical implications of relying on AI agents? As we all navigate the adoption curve, we’ll do our best to help our community answer these questions. While building agents might quickly become easier, solving for these downstream impacts remains an open problem.


Contract-as-Code: Why Finance Teams Are Taking Over Your API Contracts

Forward-thinking companies are now applying cloud native principles to contract management. Just as infrastructure became code with tools like Terraform and Ansible, we’re seeing a similar transformation with business agreements becoming “contracts-as-code.” This shift integrates critical contract information directly into the CI/CD pipeline through APIs that connect legal document management with operational workflows. Contract experts at ContractNerds highlight how API connections enable automation and improve workflow management beyond what traditional contract lifecycle management systems can achieve alone. Interestingly, this cloud native contract revolution hasn’t been led by legal departments. From our experience working with over 1,500 companies, contract ownership is rapidly shifting to finance and operations teams, with CFOs becoming the primary stakeholders in contract management systems. ... As cloud native architectures mature, treating business contracts as code becomes essential for maintaining velocity. Successful organizations will break down the artificial boundary between technical contracts (APIs) and business contracts (legal agreements), creating unified systems where all obligations and dependencies are visible, trackable, and automatable.
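One way to picture "contracts-as-code" is a CI step that validates machine-readable contract terms alongside the services that depend on them. Everything below (the record fields, the `check_renewals` helper, the dates) is a hypothetical sketch, not any vendor's actual schema:

```python
from datetime import date

# Hypothetical machine-readable contract records, versioned in the repo
# next to the services they govern.
contracts = [
    {"party": "Acme Corp", "renewal": date(2025, 6, 30), "sla_uptime": 0.999},
    {"party": "Globex", "renewal": date(2025, 2, 1), "sla_uptime": 0.95},
]

def check_renewals(contracts, today, window_days=90):
    """Flag contracts renewing within the window -- the kind of check a
    CI pipeline could run on every merge, surfacing obligations to
    finance and operations teams automatically."""
    return [c["party"] for c in contracts
            if 0 <= (c["renewal"] - today).days <= window_days]

print(check_renewals(contracts, today=date(2025, 4, 20)))  # -> ['Acme Corp']
```

The point is not the specific check but the pattern: once obligations live in version-controlled data, the same pipeline that gates code changes can also gate, track, and report on business commitments.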


ChatGPT can remember more about you than ever before – should you be worried?

Persistent memory could be hugely useful for work. Julian Wiffen, Chief of AI and Data Science at Matillion, a data integration platform with AI built in, sees strong use cases: “It could improve continuity for long-term projects, reduce repeated prompts, and offer a more tailored assistant experience," he says. But he’s also wary. “In practice, there are serious nuances that users, and especially companies, need to consider.” His biggest concerns here are privacy, control, and data security. ... OpenAI stresses that users can still manage memory – delete individual memories that aren't relevant anymore, turn it off entirely, or use the new “Temporary Chat” button. This now appears at the top of the chat screen for conversations that are not informed by past memories and won't be used to build new ones either. However, Wiffen says that might not be enough. “What worries me is the lack of fine-grained control and transparency,” he says. “It's often unclear what the model remembers, how long it retains information, and whether it can be truly forgotten.” ... “Even well-meaning memory features could accidentally retain sensitive personal data or internal information from projects. And from a security standpoint, persistent memory expands the attack surface.” This is likely why the new update hasn't rolled out globally yet.


How to deal with tech tariff terror

Are you confused about what President Donald J. Trump is doing with tariffs? Join the crowd; we all are. But if you’re in charge of buying PCs for your company (because Windows 10 officially reaches end-of-life status on Oct. 14), all this confusion is quickly turning into worry. Before diving into what this all means, let’s clarify one thing: you will be paying more for your technology gear — period, end of statement. ... As Ingram Micro CEO Paul Bay said in a CRN interview: “Tariffs will be passed through from the OEMs or vendors to distribution, then from distribution out to our solution providers and ultimately to the end users.” It’s already happening. Taiwan-based computing giant Acer’s CEO, Jason Chen, recently spelled it out cleanly: “10% probably will be the default price increase because of the import tax. It’s very straightforward.” When Trump came into office, we all knew there would be a ton of tariffs coming our way, especially on Chinese products such as Lenovo computers, or products largely made in China, such as those from Apple and Dell. ... But wait! It gets even murkier. Apparently that tariff “relief” is temporary and partial. US Commerce Secretary Howard Lutnick has already said that sector-specific tariffs targeting electronics are forthcoming, “probably a month or two.” Just to keep things entertaining, Trump himself has at times contradicted his own officials about the scope and duration of the exclusions.


AI Is Essential for Business Survival but It Doesn’t Guarantee Success

Li suggests companies look at how AI is integrated across the entire value chain. "To realize business value, you need to improve the whole value chain, not just certain steps." According to her, a comprehensive value chain framework includes suppliers, employees, customers, regulators, competitors, and the broader marketplace environment. For example, Li explains that when AI is applied internally to support employees, the focus is often on boosting productivity. However, using AI in customer-facing areas directly affects the products or services being delivered, which introduces higher risk. Similarly, automating processes for efficiency could influence interactions with suppliers — raising the question of whether those suppliers are prepared to adapt. ... Speaking of organizational challenges, Li discusses how positioning AI in business and positioning AI teams in organizations is critical. Based on the organization’s level of readiness and maturity, it could adopt a centralized, distributed, or federated model, but the focus should be on people. Li then notes that an organization’s governance processes are tied to its people, activities, and operating model. She adds, “If you already have an investment, evaluate and adjust your investment expectations based on the exercise.”


AI Regulation Versus AI Innovation: A Fake Dichotomy

The problem is that institutionalization with weak or absent regulation – and we see algorithms as institutions – tends to move in an extractive direction, undermining development. If development requires technological innovation, Acemoglu, Johnson, and Robinson taught us that inclusive institutions that are transparent, equitable, and effective are needed. In a nutshell, long-term prosperity requires democracy and its key values. We must, therefore, democratize the institutions that play such a key role in shaping our contexts of interaction by affecting individual behaviors with collective implications. The only way to make algorithms more democratic is by regulating them, i.e., by creating rules that establish key values, procedures, and practices that ought to be respected if we, as members of political communities, are to have any control over our future. Democratic regulation of algorithms demands forms of participation, revisability, protection of pluralism, struggle against exclusion, complex output accountability, and public debate, to mention a few elements. We must bring these institutions closer to democratic principles, as we have tried to do with other institutions. When we consider inclusive algorithmic institutions, the value of equality plays a crucial role—often overlapping with the principle of participation.


The Shadow AI Surge: Study Finds 50% of Workers Use Unapproved AI Tools

The problem is the ease of access to AI tools, and a work environment that increasingly advocates the use of AI to improve corporate efficiency. It is little wonder that employees seek their own AI tools to improve their personal efficiency and maximize the potential for promotion. It is frictionless, says Michael Marriott, VP of marketing at Harmonic Security. “Using AI at work feels like second nature for many knowledge workers now. Whether it’s summarizing meeting notes, drafting customer emails, exploring code, or creating content, employees are moving fast.” If the official tools aren’t easy to access or if they feel too locked down, employees will use whatever’s available, often via an open tab in their browser. There is also almost never any malicious intent (absent, perhaps, the mistaken employment of rogue North Korean IT workers); merely a desire to do and be better. If this involves using unsanctioned AI tools, employees will likely not disclose their actions. The reasons may be complex but combine elements of a reluctance to admit that their efficiency is AI assisted rather than natural, and knowledge that use of personal shadow AI might be discouraged. The result is that enterprises often have little knowledge of the extent of shadow AI use, or of the risks it may present.


The Rise of the AI-Generated Fake ID

The rise of AI-generated IDs poses a serious threat to digital transactions for three key reasons. The physical and digital processes businesses use to catch fraudulent IDs are not created equal; less sophisticated solutions may not be advanced enough to identify emerging fraud methods. With AI-generated ID images readily available on the dark web for as little as $5, ownership and usage are proliferating. And left unchallenged, AI fraud will damage consumer trust, purchasing behavior, and business bottom lines. IDScan.net research from 2024 demonstrated that 78% of consumers pointed to the misuse of AI as their core fear around identity protection. Equally, 55% believe current technology isn’t enough to protect our identities. Hiding behind the furor of nefarious, super-advanced AI, generating AI IDs is fairly rudimentary. Dark web suppliers rely on PDF417 and ID image generators, using a degree of automation to match data inputs onto a contextual background. Easy-to-use tools such as Thispersondoesnotexist make it simple for anyone to cobble together a quality fake ID image and a synthetic identity. To deter potential buyers of AI-generated fake IDs, the identity verification industry needs to demonstrate that our solutions are advanced enough to spot them, even as they increase in quality.


7 mistakes to avoid when turning a Raspberry Pi into a personal cloud

A Raspberry Pi may seem forgiving regarding power needs, but underestimating its requirements can lead to sudden shutdowns and corrupted data. Cloud services that rely on a stable connection to read and write data need consistent energy for safe operation. A subpar power supply might struggle under peak usage, leading to instability or errors. Ensuring sufficient voltage and amperage is key to avoiding complications. A strong power supply reduces random reboots and performance bottlenecks. When the Pi experiences frequent resets, you risk damaging your data and your operating system’s integrity. In addition, any connected external drives might encounter file system corruption, harming stored data. Taking steps to confirm your power setup meets recommended standards goes a long way toward keeping your cloud server running reliably. ... A personal cloud server can create a false sense of security if you forget to establish a backup routine. Files stored on the Pi can be lost due to unexpected drive failures, accidents, or system corruption. Relying on a single storage device for everything contradicts the data redundancy principle. Setting up regular backups protects your data and helps you restore from mishaps with minimal downtime. Building a reliable backup process means deciding how often to copy your files and choosing safe locations to store them.
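The backup routine described above can be sketched with nothing but the standard library. The paths are placeholders, and in practice you would schedule something like this from cron and also copy snapshots to a second, off-device location:

```python
import shutil
import time
from pathlib import Path

def snapshot(source: Path, backup_root: Path) -> Path:
    """Copy the cloud data directory into a timestamped snapshot folder.
    A fuller setup would also prune old snapshots and replicate them to
    a second device for redundancy."""
    dest = backup_root / time.strftime("%Y%m%d-%H%M%S")
    shutil.copytree(source, dest)
    return dest

# Example invocation (paths are placeholders):
# snapshot(Path("/srv/cloud-data"), Path("/mnt/usb-backup"))
```

Timestamped folders keep each run independent, so a corrupted run never overwrites a known-good snapshot, which directly addresses the single-storage-device risk mentioned above.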

Daily Tech Digest - March 13, 2025


Quote for the day:

"Your present circumstances don’t determine where you can go; they merely determine where you start." -- Nido Qubein


Becoming an AI-First Organization: What CIOs Must Get Right

"The three pillars of an AI-first organization are data, infrastructure and people. Data must be treated as a strategic asset with robust quality, privacy and security standards," Simha said. Along with responsible AI, responsible data management is equally crucial. When implemented effectively, data privacy, regulatory compliance, bias and security do not pose issues to an AI-first organization. Yeo described the AI-first approach as both a journey and a destination. "Just using AI tools doesn't make you AI-first. Organizations must explore AI's full potential." He compared today's AI evolution to the early days of the internet. "Decades ago, businesses knew they had to go online but didn't know how. Now, if you're not online, you're obsolete. AI is following the same trajectory - it will soon be indispensable for business success." ... Simha stressed the importance of enterprise architecture in AI deployment. "AI success depends on how well data flows across an organization. Organizations must select the right architecture patterns - real-time data processing requires a Kappa architecture, while periodic reporting benefits from a Lambda approach. A well-designed data foundation is crucial," Simha said. As AI adoption grows, ethical concerns and regulatory compliance remain critical considerations. 
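The Kappa-versus-Lambda distinction Simha draws can be sketched in miniature: Lambda maintains separate batch and speed layers merged at query time, while Kappa treats everything as one replayable stream. This toy version glosses over the storage, replay, and fault-tolerance machinery of real systems:

```python
# Toy illustration: both architecture styles compute a running total of events.

events = [3, 5, 2, 7]  # pretend these arrive over time

# Kappa: a single streaming path; reprocessing history means replaying the log
# through the exact same code path that handles new data.
def kappa_total(stream):
    total = 0
    for e in stream:
        total += e
    return total

# Lambda: a high-latency batch layer over historical data plus a low-latency
# speed layer for recent events, merged when queried.
def lambda_total(batch, recent):
    batch_view = sum(batch)    # periodic batch job
    speed_view = sum(recent)   # incremental speed layer
    return batch_view + speed_view

assert kappa_total(events) == lambda_total(events[:3], events[3:]) == 17
```

The trade-off the sketch hints at: Lambda duplicates logic across two code paths but suits periodic reporting, while Kappa keeps one path, which is why it fits the real-time processing case Simha describes.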


From Box-Ticking to Risk-Tackling: Evolving Your GRC Beyond Audits

The problem, though, is that merely passing an audit does not necessarily mean a business is doing all it can to mitigate its risks. On their own, audits can fall short of driving full GRC maturity for several reasons ... Auditors are generally outsiders to the businesses they audit — which is good in the sense that it makes them objective evaluators. But it can also lead to situations where they have a limited understanding of what's really going on within a company's GRC practices and are beholden to the information provided by the company's team members on the other side of the assessment table. They may not ask the questions needed to gain adequate understanding to assess and find gaps, ultimately overlooking pitfalls that only insiders know about, and which would become obvious only following a higher degree of scrutiny than a standardized audit. ... But for companies that have made advanced GRC investments, such as automations that pull data from across a diverse set of disparate systems, deeper scrutiny will help validate the value that these investments have created. It may also uncover risk management weak points that the business is overlooking, allowing it to strengthen its GRC program even further. It's generally OK, by the way, if your business submits itself to a high degree of risk management scrutiny, only to fail the assessment because its controls are not as robust as it expected. 


How to use ChatGPT to write code - and my favorite trick to debug what it generates

After repeated tests, it became clear that if you ask ChatGPT to deliver a complete application, the tool will fail. A corollary to this observation is that if you know nothing about coding and want ChatGPT to build something, it will fail. Where ChatGPT succeeds -- and does so very well -- is in helping someone who already knows how to code to build specific routines and get tasks done. Don't ask for an app that runs on the menu bar. But if you ask ChatGPT for a routine to put a menu on the menu bar, and paste that into your project, the tool will do quite well. Also, remember that, while ChatGPT appears to have a tremendous amount of domain-specific knowledge (and often does), it lacks wisdom. As such, the tool may be able to write code, but it won't be able to write code containing the nuances for specific or complex problems that require deep experience. Use ChatGPT to demo techniques, write small algorithms, and produce subroutines. You can even get ChatGPT to help you break down a bigger project into chunks, and then you can ask it to help you code those chunks. ... But you can do several things to help refine your code, debug problems, and anticipate errors that might crop up. My favorite new AI-enabled trick is to feed code to a different ChatGPT session (or a different chatbot entirely) and ask, "What's wrong with this code?"
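The "what's wrong with this code?" trick can be scripted. The prompt-building below is generic; the commented-out call assumes the OpenAI Python client, and the model name is a placeholder:

```python
def build_review_prompt(code: str) -> str:
    """Wrap a snippet in the review question suggested above, so a fresh
    chatbot session sees it without the context (and biases) of the
    session that generated it."""
    return ("What's wrong with this code? List bugs, edge cases, "
            "and error handling I should anticipate.\n\n```\n" + code + "\n```")

snippet = "def mean(xs):\n    return sum(xs) / len(xs)"  # fails on an empty list
prompt = build_review_prompt(snippet)

# Sending the prompt to a separate session might look like this
# (untested sketch; "gpt-4o" is a placeholder model name):
# from openai import OpenAI
# reply = OpenAI().chat.completions.create(
#     model="gpt-4o",
#     messages=[{"role": "user", "content": prompt}])
```

Asking a model other than the one that wrote the code is the key move: the reviewer session has no stake in the original answer, much like handing code to a colleague for review.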


How AI-enabled ‘bossware’ is being used to track and evaluate your work

Employee monitoring tools can increase efficiency with features such as facial recognition, predictive analytics, and real-time feedback for workers, allowing them to better prioritize tasks and even prevent burnout. When AI is added, the software can be used to track activity patterns, flag unusual behavior, and analyze communication for signs of stress or dissatisfaction, according to analysts and industry experts. It also generates productivity reports, classifies activities, and detects policy violations. ... LLMs are often used in predicting employee behaviors, including the risk of quitting, unionizing, or other actions, Moradi said. However, their role is mostly in analyzing personal communications, such as emails or messages. That can be tricky, because interpreting messages across different people can lead to incorrect inferences about someone’s job performance. “If an algorithm causes someone to be laid off, legal recourse for bias or other issues with the decision-making process is unclear, and it raises important questions about accountability in algorithmic decisions,” she said. The problem, Moradi explained, is that while AI can make bossware more efficient and insightful, the data being collected by LLMs is obfuscated. “So, knowing the way that these decisions [like layoffs] are made are obscured by these, like, black boxes,” Moradi said.


Attackers Can Manipulate AI Memory to Spread Lies

By crafting a series of seemingly innocuous prompts, an attacker can insert misleading data into an AI agent's memory bank, which the model later relies on to answer unrelated queries from other users. Researchers tested Minja on three AI agents developed on top of OpenAI's GPT-4 and GPT-4o models. These include RAP, a ReAct agent with retrieval-augmented generation that integrates past interactions into future decision-making for web shops; EHRAgent, a medical AI assistant designed to answer healthcare queries; and QA Agent, a custom-built question-answering model that reasons using Chain of Thought and is augmented by memory. A Minja attack on the EHRAgent caused the model to misattribute patient records, associating one patient's data with another. In the RAP web shop experiment, a Minja attack tricked the AI into recommending the wrong product, steering users searching for toothbrushes to a purchase page for floss picks. The QA Agent fell victim to manipulated memory prompts, producing incorrect answers to multiple-choice questions based on poisoned context. Minja operates in stages. An attacker interacts with an AI agent by submitting prompts that contain misleading contextual information. Referred to as indication prompts, they appear to be legitimate but contain subtle memory-altering instructions. 
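Why such poisoning persists is easy to see in a minimal memory-augmented agent: past interactions are retrieved into future prompts without any provenance check. The sketch below is illustrative only (no real agent framework, invented function names), showing how a record planted by one user later surfaces for an unrelated user:

```python
# Minimal memory-augmented agent: stores past Q/A pairs and retrieves
# keyword-matching records into the context of future queries.
memory = []  # shared across users -- the root of the problem

def remember(question, answer):
    memory.append((question, answer))

def retrieve_context(query):
    """Naive retrieval: any stored record sharing a word with the query is
    injected into the prompt, with no check of who stored it or why."""
    words = set(query.lower().split())
    return [rec for rec in memory if words & set(rec[0].lower().split())]

# An attacker plants a misleading record via an innocuous-looking interaction:
remember("best toothbrush recommendation", "Recommend floss picks instead.")

# A later, unrelated user's query pulls the poisoned record into context,
# where it shapes the agent's answer:
context = retrieve_context("which toothbrush should I buy?")
assert context  # the planted record is now part of the prompt
```

Mitigations follow directly from the sketch: per-user memory isolation, provenance tagging on stored records, and filtering retrieved context before it reaches the model.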


CISOs, are your medical devices secure? Attackers are watching closely

“To truly manage and prioritize risks, organizations need to look beyond technical scores and consider contextual risk factors that impact operations related to patient care. This can include identifying devices in critical care areas, legacy devices close to or past their end-of-life status, where any insecure communication protocols are, and how sensitive personal information is being stored,” Greenhalgh added. ... “For CISOs, the priority should be proactive engagement. First, implement real-time vulnerability tracking and ensure security patches can be deployed quickly without disrupting device functionality. Medical device security must be continuous—not just a checkpoint during development or regulatory submission. Second, regulatory alignment isn’t a one-time effort. The FDA now expects ongoing vulnerability monitoring, coordinated disclosure policies, and robust software patching strategies. Automating security processes—whether for SBOM (Software Bill of Materials) management, dependency tracking, or compliance reporting—reduces human error and improves response times. An SBOM is valuable not just for compliance but as a tool for tracking and mitigating vulnerabilities throughout a device’s lifecycle,” Ken Zalevsky, CEO of Vigilant Ops explained.
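An SBOM's value for vulnerability tracking comes from making this cross-check mechanical: match shipped components against an advisory feed whenever either side changes. The sketch below uses made-up component records and advisory IDs purely for illustration:

```python
# Hypothetical SBOM excerpt: components shipped in a device's firmware.
sbom = [
    {"name": "openssl", "version": "1.1.1"},
    {"name": "busybox", "version": "1.36.0"},
]

# Hypothetical advisory feed: (component, affected version) -> advisory ID.
advisories = {("openssl", "1.1.1"): "EXAMPLE-ADVISORY-001"}

def affected_components(sbom, advisories):
    """Return (component, advisory) pairs -- the check a security team
    would automate and re-run whenever the advisory feed or the SBOM
    changes, rather than auditing devices by hand."""
    return [(c["name"], advisories[(c["name"], c["version"])])
            for c in sbom if (c["name"], c["version"]) in advisories]

print(affected_components(sbom, advisories))
# -> [('openssl', 'EXAMPLE-ADVISORY-001')]
```

Real pipelines would consume a standard SBOM format (e.g. CycloneDX or SPDX) and version ranges rather than exact matches, but the shape of the automation is the same.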


Can AI Teach You Empathy?

By leveraging AI-driven insights, banks can tailor their training programs to address specific skill gaps and enhance employee development. However, AI isn’t infallible, and it’s crucial for banks to implement tools that not only support learning but also foster a reliable and effective training environment. Striking the right balance between AI-driven training and human oversight ensures that these tools enhance employee growth without compromising accuracy or effectiveness. ... Experiential learning has long been a cornerstone of learning and development. Students, for example, who participate in experiential learning often develop a deeper understanding of the material and achieve statistically better outcomes than those who do not. While AI may not perfectly replicate a customer’s response, it provides new employees with a valuable opportunity to practice handling complex issues before interacting with real customers. AI-powered versions of these trainings can make it more accessible, allowing more employees to benefit. ... Many employees find it challenging to incorporate AI into their daily tasks and may need guidance to understand its value, especially in managing customer interactions. Some may also be resistant, fearing that AI could eventually replace their jobs, Huang says.


The Missing Piece in Platform Engineering: Recognizing Producers

The evolution of technology has shown us time and again that those who innovate are the ones who shape the future. Alan Kay’s words resonate strongly in the modern era, where software, artificial intelligence, and digital transformation continue to drive change across industries. ... “A Platform is a curated experience for engineers (the platform’s customers)” is a quote from the Team Topologies book. It is excellent and doesn’t contradict the platform business way of thinking, but it only calls out one side of the producer/consumer model. This is precisely the trap I fell into. When I worked with platform builders, we focused almost entirely on the application teams that consumed platform services. We rapidly became the blocker to those teams, just like the SRE and DevOps teams that came before us. We couldn’t onboard capabilities and features fast enough, meaning we were supporting the old ways while trying to build the new. ... Chris Plank, Enterprise Architect at NatWest, discusses this in our interview for his Platform Engineering Day talk: “We have since been set four challenges by leadership that I talk about: do things faster, do things simpler, enable inner sourcing, and deliver centralized capabilities in a self-service way… Our inner sourcing model will allow us to have multiple teams working on our platform… They are empowered to start contributing changes.”


Data Centers in Space: Separating Fact from Science Fiction

Among the many reasons for interest in orbital data centers is the potential for improved sustainability. However, the definition of a data center in space remains fluid, shaped by current technological limitations and evolving industry perspectives. Lonestar Data Holdings chairman and CEO Christopher Slott told Data Center Knowledge that his firm works from the definitions of a data center from industry standards bodies including the Uptime Institute and the Building Industry Consulting Service International (BICSI). ... Axiom Space plans to deploy larger ODC infrastructure in the coming years that are more similar to terrestrial data centers in terms of utility and capacity. The goal is to develop and operationalize terrestrial-grade cloud regions in low-Earth orbit (LEO). ... James noted that space presents the ultimate edge computing challenge – limited bandwidth, extreme conditions, and no room for failure. “To ensure resilience and autonomy, the platform incorporates automated rollbacks and self-healing capabilities through delta updates and health monitoring,” James said. ... With the Axiom Space deployment, the initial workloads will be small but scalable to the much larger ODC infrastructure that the company plans to deploy in the coming years. “Red Hat Device Edge enables secure, low-latency data processing directly on the ISS, allowing applications to run where the data is being generated,” James said. 


CISA cybersecurity workforce faces cuts amid shifting US strategy

Analysts suggest these layoffs and funding cuts indicate a broader strategic shift in the U.S. government’s cybersecurity approach. Neil Shah, VP at Counterpoint Research, sees both risks and opportunities in the restructuring. “In the near to mid-term, this could weaken the US cybersecurity infrastructure. However, with AI proliferating, the US government likely has a Plan B — potentially shifting toward privatized cybersecurity infrastructure projects, similar to what we’re seeing with Project Stargate for AI,” Shah said. “If these gaps aren’t filled with viable alternatives, vulnerabilities could escalate from small-scale exploits to large-scale cyber incidents at state or federal levels. Signs point to a broader cybersecurity strategy reboot, with funding likely being redirected toward more efficient and sophisticated players rather than a purely vertical, government-led approach.” While some fear heightened risks, others argue the shift could lead to more tech-driven solutions. Faisal Kawoosa, founder and lead analyst at Techarc, views the move as part of a larger digital transformation. “Elon Musk’s role is not just about cost-cutting but also about leveraging technology to create more efficient systems,” Kawoosa said. “DOGE operates as a digital transformation program for US governance, exploring tech-first approaches to achieving similar or better results.”

Daily Tech Digest - January 27, 2025


Quote for the day:

"Your problem isn't the problem. Your reaction is the problem." -- Anonymous


Revolutionizing Investigations: The Impact of AI in Digital Forensics

One of the most significant challenges in modern digital forensics, both in the corporate sector and law enforcement, is the abundance of data. Due to increasing digital storage capacities, even mobile devices today can accumulate up to 1TB of information. ... Digital forensics started benefiting from AI features a few years ago. The first major development in this regard was the implementation of neural networks for picture recognition and categorization. This powerful tool has been instrumental for forensic examiners in law enforcement, enabling them to analyze pictures from CCTV and seized devices more efficiently. It significantly accelerated the identification of persons of interest and child abuse victims as well as the detection of case-related content, such as firearms or pornography. ... No matter how advanced, AI operates within the boundaries of its training, which can sometimes be incomplete or imperfect. Large language models, in particular, may produce inaccurate information if their training data lacks sufficient detail on a given topic. As a result, investigations involving AI technologies require human oversight. In DFIR, validating discovered evidence is standard practice. It is common to use multiple digital forensics tools to verify extracted data and manually check critical details in source files. 
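The validation practice described, checking one tool's findings against another, often reduces to comparing cryptographic hashes of the extracted artifacts. A minimal sketch (file names and contents are illustrative):

```python
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def cross_validate(extraction_a: dict, extraction_b: dict) -> list:
    """Compare two tools' extractions of the same artifacts by hash.
    Mismatches are flagged for manual review of the source files,
    rather than either tool's output being trusted blindly."""
    return [name for name in extraction_a
            if name in extraction_b
            and sha256_of(extraction_a[name]) != sha256_of(extraction_b[name])]

# Two tools extract the same artifacts; one record differs and gets flagged.
tool_a = {"chat.db": b"hello", "photo.jpg": b"\xff\xd8"}
tool_b = {"chat.db": b"hello", "photo.jpg": b"\xff\xd9"}
print(cross_validate(tool_a, tool_b))  # -> ['photo.jpg']
```

The same discipline applies to AI-assisted findings: a classifier's flag on an image is a lead, and the underlying file still gets verified by hash and by eye before it becomes evidence.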


Is banning ransomware payments key to fighting cybercrime?

Implementing a payment ban is not without challenges. In the short term, retaliatory attacks are a real possibility as cybercriminals attempt to undermine the policy. However, given the prevalence of targets worldwide, I believe most criminal gangs will simply focus their efforts elsewhere. The government’s resolve would certainly be tested if payment of a ransom was seen as the only way to avoid public health data being leaked, energy networks being crippled, or preventing a CNI organization from going out of business. In such cases, clear guidelines as well as technical and financial support mechanisms for affected organizations are essential. Policy makers must develop playbooks for such scenarios and run education campaigns that raise awareness about the policy’s goals, emphasizing the long-term benefits of standing firm against ransom demands. That said, increased resilience, both technological and organizational, is integral to any strategy. Enhanced cybersecurity measures are critical, in particular a zero trust strategy that reduces an organization’s attack surface and stops hackers from being able to move laterally in the network. The U.S. federal government has already committed to move to zero trust architectures.


Building a Data-Driven Culture: Four Key Elements

Why is building a data-driven culture incredibly hard? Because it calls for a behavioral change across the organization. This work is neither easy nor quick. To better appreciate the scope of this challenge, let’s do a brief thought exercise. Take a moment to reflect on these questions: How involved are your leaders in championing and directly following through on data-driven initiatives? Do you know whether your internal stakeholders are all equipped and empowered to use data for all kinds of decisions, strategic or tactical? Does your work environment make it easy for people to come together, collaborate with data, and support one another when they’re making decisions based on the insights? Does everyone in the organization truly understand the benefits of using data, and are success stories regularly shared internally to inspire people to action? If your answers to these questions are “I’m not sure” or “maybe,” you’re not alone. Most leaders assume in good faith that their organizations are on the right path. But they struggle when asked for concrete examples or data-backed evidence to support these gut-feeling assumptions. The leaders’ dilemma becomes even more clear when you consider that the elements at the core of the four questions above — leadership intervention, data empowerment, collaboration, and value realization — are inherently qualitative. Most organizational metrics or operational KPIs don’t capture them today. 


How CIOs Should Prepare for Product-Led Paradigm Shift

Scaling product centricity in an organization is like walking a tightrope. Leaders must drive change while maintaining smooth operations. This requires forming cross-functional teams, evaluating outcomes, and navigating multiple operating models. As a CIO, balancing change while facing the internal resistance of a risk-averse, siloed business culture can feel like facing a strong wind on a high wire. ...The key to overcoming this is to demonstrate the benefits of a product-centric approach incrementally, proving its value until it becomes the norm. To prevent cultural resistance from derailing your vision for a more agile enterprise, leverage multiple IT operating models with a service or value orientation to meet the ambitious expectations of CEOs and boards. Engage the C-suite by taking a holistic view of how democratized IT can be used to meet stakeholder expectations. Every organization has a business and enterprise operating model to create and deliver value. A business model might focus on manufacturing products that delight customers, requiring the IT operating model to align with enterprise expectations. This alignment involves deciding whether IT will merely provide enabling services or actively partner in delivering external products and services.


CISOs gain greater influence in corporate boardrooms

"As the role of the CISO grows more complex and critical to organisations, CISOs must be able to balance security needs with business goals and culture, and articulate the value of security investments." She highlights the importance of strong relationships across departments and stakeholders in bolstering cybersecurity and privacy programmes. The study further discusses the positive impact of having board members with a cybersecurity background. These members foster stronger relationships with security teams and have more confidence in their organisation's security stance. For instance, boards with a CISO member report higher effectiveness in setting strategic cybersecurity goals and communicating progress, compared to boards without such expertise. CISOs with robust board relationships report improved collaboration with IT operations and engineering, allowing them to explore advanced technologies like generative AI for enhanced threat detection and response. However, gaps persist in priority alignment between CISOs and boards, particularly around emerging technologies, upskilling, and revenue growth. Expectations for CISOs to develop leadership skills add complexity to their role, with many recognising a gap in business acumen, emotional intelligence, and communication. 


Researchers claim Linux kernel tweak could reduce data center energy use by 30%

Researchers at the University of Waterloo's Cheriton School of Computer Science, led by Professor Martin Karsten and including Peter Cai, identified inefficiencies in network traffic processing for communications-heavy server applications. Their solution, which involves rearranging operations within the Linux networking stack, has shown improvements in both performance and energy efficiency. The modification, presented at an industry conference, increases throughput by up to 45 percent in certain situations without compromising tail latency. Professor Karsten likened the improvement to optimizing a manufacturing plant's pipeline, resulting in more efficient use of data center CPU caches. Professor Karsten collaborated with Joe Damato, a distinguished engineer at Fastly, to develop a non-intrusive kernel change consisting of just 30 lines of code. This small but impactful modification has the potential to reduce energy consumption in critical data center operations by as much as 30 percent. Central to this innovation is a feature called IRQ (interrupt request) suspension, which balances CPU power usage with efficient data processing. By reducing unnecessary CPU interruptions during high-traffic periods, the feature enhances network performance while maintaining low latency during quieter times.
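The payoff of IRQ suspension can be illustrated with a toy model (purely illustrative; the real feature is a small change inside the Linux NAPI networking path, not application-level code): instead of raising one interrupt per arriving packet, interrupts stay suspended during high traffic while the driver polls the queue in batches.

```python
# Toy model of interrupt mitigation. Purely illustrative; the actual
# IRQ-suspension feature lives in the Linux kernel networking stack.

def irqs_per_packet(packets: int) -> int:
    """Naive mode: every packet arrival raises a CPU interrupt."""
    return packets

def irqs_with_suspension(packets: int, batch: int) -> int:
    """Suspended mode: IRQs stay off while the driver polls the queue,
    handling up to `batch` packets per wakeup (ceiling division)."""
    return -(-packets // batch)

naive = irqs_per_packet(10_000)
suspended = irqs_with_suspension(10_000, batch=64)

assert naive == 10_000
assert suspended == 157  # ceil(10000 / 64)
print(f"interrupts avoided: {1 - suspended / naive:.1%}")
```

The batch size of 64 is an invented parameter; the point is only that polling amortizes wakeups, which is why the feature cuts CPU work under load without hurting latency when traffic is light.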


GitHub Desktop Vulnerability Risks Credential Leaks via Malicious Remote URLs

While the credential helper is designed to return a message containing the credentials that are separated by the newline control character ("\n"), the research found that GitHub Desktop is susceptible to a case of carriage return ("\r") smuggling whereby injecting the character into a crafted URL can leak the credentials to an attacker-controlled host. "Using a maliciously crafted URL it's possible to cause the credential request coming from Git to be misinterpreted by Github Desktop such that it will send credentials for a different host than the host that Git is currently communicating with thereby allowing for secret exfiltration," GitHub said in an advisory. A similar weakness has also been identified in the Git Credential Manager NuGet package, allowing for credentials to be exposed to an unrelated host. ... "While both enterprise-related variables are not common, the CODESPACES environment variable is always set to true when running on GitHub Codespaces," Ry0taK said. "So, cloning a malicious repository on GitHub Codespaces using GitHub CLI will always leak the access token to the attacker's hosts." ... In response to the disclosures, the credential leakage stemming from carriage return smuggling has been treated by the Git project as a standalone vulnerability (CVE-2024-52006, CVSS score: 2.1) and addressed in version v2.48.1.
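A sketch of the underlying parsing flaw may help. The following hypothetical parser is not GitHub Desktop's real implementation, and `evil.example` is an invented host; it only shows why treating "\r" as a line terminator lets a crafted URL smuggle a second host field into a newline-delimited credential exchange.

```python
from urllib.parse import unquote

# Hypothetical sketch of carriage-return smuggling against a naive
# line-based credential-protocol parser (NOT GitHub Desktop's code).

def parse_credential_request(raw: str) -> dict:
    fields = {}
    for line in raw.splitlines():  # splitlines() breaks on "\n" AND "\r"
        if "=" in line:
            key, value = line.split("=", 1)
            fields[key] = value  # a later key silently overwrites an earlier one
    return fields

# Git requests credentials for the legitimate host:
benign = "protocol=https\nhost=github.com\n"
assert parse_credential_request(benign)["host"] == "github.com"

# A crafted URL embeds an encoded carriage return (%0d), so the naive
# parser sees a second "host=" field and routes credentials elsewhere:
crafted = unquote("protocol=https\nhost=github.com%0dhost=evil.example\n")
assert parse_credential_request(crafted)["host"] == "evil.example"
```

The fix in Git v2.48.1 corresponds to rejecting or stripping stray control characters before they can be interpreted as field separators.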


The No-Code Dream: How to Build Solutions Your Customer Truly Needs

What's excellent about no-code is that you can build a platform that won't require your customers to be development professionals — but will allow customization. That's the best approach: create a blank canvas for people, and they will take it from there. Whether it's surveys, invoices, employee records, or something completely different, developers have the tools to make it visually appealing and more intuitive for your customers. I also want to break the myth that no-code doesn't allow effective data management. It is possible to create a no-code platform that will empower users to perform complex mathematical operations seamlessly and to support managing interrelated data. This means users' applications will be more robust than their competitors and produce more meaningful insights. ... As a developer, I am passionate about evolving tech and our industry's challenges. I am also highly aware of people's concerns over the security of many no-code solutions. Security is a critical component of any software; no-code solutions are no exception. One-off custom software builds do not typically undergo the same rigorous security testing as widely used commercial software due to the high cost and time involved. This leaves them vulnerable to security breaches.
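As an illustration of the kind of user-defined formula evaluation such a platform could offer, here is a hypothetical sketch; real no-code products ship their own sandboxed expression engines, and all field names here are invented.

```python
import ast
import operator

# Hypothetical sketch: safely evaluate a user-authored arithmetic formula
# over a record's fields, without exposing eval() or arbitrary code.

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def eval_formula(expr: str, fields: dict) -> float:
    """Walk the expression AST, allowing only arithmetic, numbers,
    and known field names; anything else is rejected."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Name) and node.id in fields:
            return fields[node.id]
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError(f"disallowed expression element: {node!r}")
    return walk(ast.parse(expr, mode="eval"))

invoice = {"quantity": 4, "unit_price": 12.5, "tax_rate": 0.1}
total = eval_formula("quantity * unit_price * (1 + tax_rate)", invoice)
assert round(total, 2) == 55.0
```

The whitelist approach is the design point: the platform decides which operations exist, which is also where the security argument for no-code rests.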


Digital Operations at Turning Point as Security and Skills Concerns Mount

The development of appropriate skills and capabilities has emerged as a critical challenge, ranking as a pressing concern in advancing digital operations. The talent shortage is most acute in North America and the media industry, where fierce competition for skilled professionals coincides with accelerating digital transformation initiatives. Organizations face a dual challenge: upskilling existing staff while competing for scarce talent in an increasingly competitive market. The report suggests this skills gap could potentially slow the adoption of new technologies and hamper operational advancement if not adequately addressed. "The rapid evolution of how AI is being applied to many parts of jobs to be done is unmatched," Armandpour said. "Raising awareness, educating, and fostering a rich learning environment for all employees is essential." ... "Service outages today can have a much greater impact due to the interdependencies of modern IT architectures, so security is especially critical," Armandpour said. "Organizations need to recognize security as a critical business imperative that helps power operational resilience, customer trust, and competitive advantage." What sets successful organizations apart is the prioritization of defining robust security requirements upfront and incorporating security-by-design into product development cycles. 


Is ChatGPT making us stupid?

In fact, one big risk right now is how dependent developers are becoming on LLMs to do their thinking for them. I’ve argued that LLMs help senior developers more than junior developers, precisely because more experienced developers know when an LLM-driven coding assistant is getting things wrong. They use the LLM to speed up development without abdicating responsibility for that development. Junior developers can be more prone to trusting LLM output too much and don’t know when they’re being given good code or bad. Even for experienced engineers, however, there’s a risk of entrusting the LLM to do too much. For example, Mike Loukides of O’Reilly Media went through their learning platform data and found developers show “less interest in learning about programming languages,” perhaps because developers may be too “willing to let AI ‘learn’ the details of languages and libraries for them.” He continues, “If someone is using AI to avoid learning the hard concepts—like solving a problem by dividing it into smaller pieces (like quicksort)—they are shortchanging themselves.” Short-term thinking can yield long-term problems. As noted above, more experienced developers can use LLMs more effectively because of experience. If a developer offloads learning for quick-fix code completion at the long-term cost of understanding their code, that’s a gift that will keep on taking.

Daily Tech Digest - January 20, 2025

Robots get their ‘ChatGPT moment’

Nvidia implies that Cosmos will usher in a “ChatGPT moment” for robotics. The company means that, just as the basic technology of neural networks existed for many years, Google’s Transformer model enabled radically accelerated training that led to LLM chatbots like ChatGPT. In the more familiar world of LLMs, we’ve come to understand the relationship between the size of the data sets used for training these models and the speed of that training and their resulting performance and accuracy. ... Driving in the real world with a person as backup is time-consuming, expensive, and sometimes dangerous — especially when you consider that autonomous vehicles need to be trained to respond to dangerous situations. Using Cosmos to train autonomous vehicles would involve the rapid creation of huge numbers of simulated scenarios. For example, imagine the simulation of every kind of animal that could conceivably cross a road — bears, deer, dogs, cats, lizards, etc. — in tens of thousands of different weather and lighting conditions. By the end of all this training, the car’s digital twin in Omniverse would be able to recognize and navigate scenarios of animals on the road regardless of the animal and the weather or time of day. That learning would then be transferred to thousands of real cars, which would also know how to navigate those situations.


How to Use AI in Cyber Deception

Adaptation is one of the most significant ways AI improves honey-potting strategies. Machine learning models can evolve alongside bad actors, enabling defenders to anticipate novel techniques. Conventional signature-based detection methods are less effective because they can only flag known attack patterns. Algorithms, on the other hand, use a behavior-based approach. Synthetic data generation is another one of AI’s strengths. This technology can produce honeytokens — digital artifacts purpose-built for deceiving would-be attackers. For example, it could create bogus credentials and a fake database. Any attempt to use those during login can be categorized as malicious because it means they used illegitimate means to gain access and exfiltrate the imitation data. While algorithms can produce an entirely synthetic dataset, they can also add certain characters or symbols to existing, legitimate information to make its copy more convincing. Depending on the sham credentials’ uniqueness, there’s little to no chance of false positives. Minimizing false positives is essential since most of the tens of thousands of security alerts professionals receive daily are inaccurate. This figure may be even higher for medium- to large-sized enterprises using conventional behavior-based scanners or intrusion detection systems because they’re often inaccurate.
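A minimal sketch of the honeytoken idea follows (hypothetical; the `svc-backup` naming and the monitor class are invented for illustration, not any vendor's product): because a honeytoken is never issued to a legitimate user, any authentication attempt with it is a high-confidence alert.

```python
import secrets
import string

# Illustrative honeytoken sketch: generate bait credentials, then treat
# any login attempt using them as malicious by construction.

def make_honeytoken(prefix: str = "svc-backup") -> dict:
    alphabet = string.ascii_letters + string.digits
    return {
        "username": f"{prefix}-{secrets.token_hex(4)}",
        "password": "".join(secrets.choice(alphabet) for _ in range(24)),
    }

class HoneytokenMonitor:
    def __init__(self):
        self.tokens = set()

    def register(self, token: dict) -> None:
        self.tokens.add(token["username"])

    def is_malicious_login(self, username: str) -> bool:
        # Honeytokens have no legitimate use, so a hit is an alert with
        # effectively zero false positives.
        return username in self.tokens

monitor = HoneytokenMonitor()
bait = make_honeytoken()
monitor.register(bait)

assert monitor.is_malicious_login(bait["username"])  # attacker took the bait
assert not monitor.is_malicious_login("alice")       # real user, no alert
```

This zero-false-positive property is exactly why the article contrasts honeytokens with noisy behavior-based scanners.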


How organizations can secure their AI code

Organizations also expose themselves to risks when developers download machine learning (ML) models or datasets from platforms like Hugging Face. “In spite of security checks on both ends, it may still happen that the model contains a backdoor that becomes active once the model is integrated,” says Alex Ștefănescu, open-source developer at the Organized Crime and Corruption Reporting Project (OCCRP). “This could ultimately lead to data being leaked from the company that used the malicious models.” ... Not all AI-based tools are coming from teams full of software engineers. “We see a lot of adoption being driven by data analysts, marketing teams, researchers, etc. within organizations,” Meyer says. These teams aren’t traditionally developing their own software but are increasingly writing simple tools that adopt AI libraries and models, so they’re often not aware of the risks involved. “This combination of shadow engineering with lower-than-average application security awareness can be a breeding ground for risk,” he adds. ... When it comes to securing enough resources to protect AI systems, some stakeholders might hesitate, viewing it as an optional expense rather than a critical investment. “AI adoption is a divisive topic in many organizations, with some leaders and teams being ‘all-in’ on adoption and some being strongly resistant,” Meyer says. 


AI-driven insights transform security preparedness and recovery

IT security teams everywhere are struggling to meet the scale of actions required to ensure IT operational risk remediation from continually evolving threats. Recovering digital operations after an incident requires a proactive system of IT observability, intelligence, and automation. Organizations should first unify visibility across their IT environments, so they can quickly identify and respond to incidents. Additionally, teams need to eliminate data silos to prevent monitoring overload and resolve issues. ... Unfortunately, many companies still lack the foundational elements needed for successful and secure AI adoption. Common challenges include fragmented or low-quality data dispersed across multiple silos, lack of coordination, a shortage of specialized talent such as data and AI engineers, and a company culture resistant to change. Fostering a culture of security awareness starts with making security a visible and integral part of everyday operations. IT leaders should focus on equipping employees with actionable insights through tools that simplify complex security issues. Training programs, tailored to different roles, help ensure that teams understand specific threats relevant to their responsibilities. Providing real-time feedback, such as simulated scenarios, builds practical awareness.


AI Is Quietly Steering Your Decisions - Before You Make Them

Agentic AI here is a critical enabler. These systems analyze user data over various modalities, including text, voice and behavioral patterns to predict intentions and influence outcomes. They are more than a handy assistant helping you cross off a to-do list. OpenAI CEO Sam Altman called these agents "AI's killer function," comparing them to "super competent colleagues that know absolutely everything about my whole life - every email, every conversation I've ever had - but don't feel like an extension." And they are everywhere. Microsoft and Google spearheaded chatbot integration into everyday tools, with Microsoft embedding its Bing Chat and AI assistants into Office software and Google enhancing productivity tools such as Workspace with Gemini capabilities. The study cited the example of Meta, which has claimed to achieve human-level play in the game Diplomacy using their AI agent CICERO. The research team behind CICERO, it says, cautions against "the potential danger for conversational AI agents" that "may learn to nudge its conversational partner to achieve a particular objective." Apple's App Intents framework, it explained, has protocols to "predict actions someone might take in the future" and "to suggest the app intent to someone in the future using predictions you [the developer] provide."


Why digital brands investing in AI to replace humans will fail

Despite its strengths, AI cannot (yet) accurately replicate core human qualities such as emotional intelligence, critical thinking, and nuanced judgment. What it can do is automate time-consuming, repetitive operations. Rather than attempting to replace human workers, forward-thinking organisations should encourage the power of human-AI collaboration. By approaching AI this way, brands can respond to customers' digital problems faster, meaning employees can use the time gained to direct their efforts to complex problem-solving, strategic planning and customer relations. Those that adopt a hybrid approach, to find the optimal balance between AI and human insight, will be most successful. The collaboration between AI-powered tools and human intelligence creates a powerful combination that can strengthen performance, drive innovation, and help deliver a better overall customer experience. ... On the other hand, businesses that are looking to replace workers, and eventually rely solely on AI-generated operations, risk losing the genuine human touch. This loss of authenticity has the potential to alienate customers, leaving them to feel that their experiences with digital brands are insincere and mechanical. 


From devops to CTO: 5 things to start doing now

If you want to be recognized for promotions and greater responsibilities, the first place to start is in your areas of expertise and with your team, peers, and technology leaders. However, shift your focus from getting something done to a practice leadership mindset. Develop a practice or platform your team and colleagues want to use and demonstrate its benefits to the organization. ... One of the bigger challenges for engineers when taking on larger technical responsibilities is shifting their mindset from getting work done today to deciding what work to prioritize and influencing longer-term implementation decisions. Instead of developing immediate solutions, the path to CTO requires planning architecture, establishing governance, and influencing teams to adopt self-organizing standards. ... “If devops professionals want to be considered for the role of CTO, they need to take the time to master a wide range of skills,” says Alok Uniyal, SVP and head of IT process consulting practice at Infosys. “You cannot become a CTO without understanding areas such as enterprise architecture, core software engineering and operations, fostering tech innovation, the company’s business, and technology’s role in driving business value. Showing leadership that you understand all technology workstreams at a company as well as key tech trends and innovations in the industry is critical for CTO consideration.”


The Human Touch in Tech: Why Local IT Support Remains Essential

While AI can handle common issues, complex or unforeseen problems often require creative solutions and in-depth technical expertise. Call center agents, with limited access to resources — and often operating under strict protocols — may be unable to depart from standardized procedures, even when doing so might be beneficial. The collaborative, adaptable problem-solving approach of a skilled, experienced IT technician is often the key to resolving these intricate challenges. Many IT issues require physical intervention and hands-on troubleshooting. Remote support, though helpful, can't always address hardware problems, network configurations, or security breaches that require on-site assessment and repair. Local IT support companies offering on-site visits have a clear advantage in addressing these types of issues efficiently and effectively. ... Local providers often possess a wide range of skills and experience, allowing them to handle a broader spectrum of issues. Their ability to think creatively and collaboratively enables them to address complex problems that may stump call center agents or AI systems. Furthermore, their local presence allows for swift on-site responses to critical situations.


Six ways to reduce cloud database costs without sacrificing performance

Automate data archiving or deletion for unused or outdated records. Use lifecycle policies to move logs older than a specified number of days to cheaper storage, or delete them. TTL (Time to Live) is an easier way to implement such a data lifecycle. TTL refers to a setting that defines the lifespan of a piece of data (e.g., a record or document) in the database. After the specified TTL expires, the data is automatically deleted or marked for deletion by the database. ... Consolidating multiple applications onto a single database results in fewer instances, reducing compute and storage costs and enabling efficient resource utilisation when workloads have similar usage patterns. The implementation can follow schema-based isolation, where a separate schema is created for each tenant, or row-level isolation, where a tenant ID column is used to segment data within tables. One example is to host a SaaS platform for multiple customers on a single database instance with logical partitions. ... Creating copies of specific data items can enhance read performance by reducing costly operations. In an e-commerce store example, you’d typically have separate tables for customers, products, and orders. Retrieving one customer’s order history would involve a query that joins the order table with the customer table and product table.
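A TTL-style cleanup can be sketched with SQLite (illustrative only; managed databases such as MongoDB or DynamoDB provide TTL natively via a TTL index or expiry attribute, so you would not hand-roll this there — the table and column names below are invented):

```python
import sqlite3
import time

# Illustrative TTL cleanup: each row carries an absolute expiry timestamp,
# and a periodic job deletes rows whose TTL has passed.

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE logs (
        id INTEGER PRIMARY KEY,
        message TEXT,
        expires_at REAL   -- Unix timestamp after which the row is stale
    )
""")

def insert_log(message: str, ttl_seconds: float) -> None:
    conn.execute(
        "INSERT INTO logs (message, expires_at) VALUES (?, ?)",
        (message, time.time() + ttl_seconds),
    )

def purge_expired() -> int:
    """Delete rows past their TTL; run periodically (cron, scheduler)."""
    cur = conn.execute("DELETE FROM logs WHERE expires_at <= ?", (time.time(),))
    conn.commit()
    return cur.rowcount

insert_log("old audit entry", ttl_seconds=-1)     # already expired
insert_log("fresh audit entry", ttl_seconds=3600)

assert purge_expired() == 1
remaining = conn.execute("SELECT COUNT(*) FROM logs").fetchone()[0]
assert remaining == 1
```

The cost saving comes from the same mechanism regardless of engine: expired rows never accumulate into storage you pay for.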


AI, IoT, and cybersecurity are at the heart of our innovation: Sharat Sinha, Airtel Business

At Airtel Business, we understand that cybersecurity is a growing concern for Indian enterprises. With cyberattacks in India projected to reach one trillion per year by 2033, businesses need robust solutions to safeguard their digital assets. That’s where Airtel Secure Internet and Airtel Secure Digital Internet come in. Airtel Secure Internet, in collaboration with Fortinet, provides comprehensive end-to-end protection by integrating Fortinet’s advanced firewall with Airtel’s high-speed Internet Leased Line (ILL). This solution offers 24/7 monitoring, real-time threat detection, and automated mitigation, all powered by Airtel’s Security Operations Centre (SOC) and Fortinet’s SOAR platform. It ensures businesses are protected from a range of cyberthreats while optimising operational efficiency, without the need for large capital investments in security infrastructure. In addition, Airtel Secure Digital Internet, in partnership with Zscaler, uses Zero Trust Architecture (ZTA) to continuously validate user, device, and network interactions. Combining Zscaler’s cloud security with Security Service Edge (SSE) technology, this solution ensures secure cloud access, SSL inspection, and centralised policy enforcement, helping businesses reduce attack surfaces and simplify security management. 



Quote for the day:

"The greatest leader is not necessarily the one who does the greatest things. He is the one that gets the people to do the greatest things." -- Ronald Reagan

Daily Tech Digest - December 22, 2024

3 Steps To Include AI In Your Future Strategic Plans

AI is complex and multifaceted, so adopting it is not as simple as replacing legacy systems with new technology. Leaders would need to dig deeper to uncover barriers and opportunities. This can involve inviting external experts to discuss AI's benefits and challenges, hosting workshops where team members can explore different case studies, or creating internal discussion groups focused on various aspects of AI technology and potential barriers to adoption. ... A strong strategic plan should clearly link prospective investments to the organization's purpose and mission. For example, if customer centricity is central to the mission, any investment in new technology should directly connect to improving customer outcomes. ... A strategy plan should not only outline planned AI initiatives but also provide a clear roadmap for implementation. Given that AI is still evolving, it's crucial not to create a roadmap in isolation from ever-changing business challenges, market dynamics, or technological advancements. ... In this context, an AI strategy roadmap should be emergent— meaning it should be grounded in key strategic intentions while also being flexible enough to adapt to unforeseen events or black swan occurrences that necessitate rethinking and adjustments.


Can Pure Scrum Actually Work?

“Pure Scrum,” described in the Scrum Guide, is an idiosyncratic framework that helps create customer value in a complex environment. However, five main issues are challenging its general corporate application:
- Pure Scrum focuses on delivery: How can we avoid running in the wrong direction by building things that do not solve our customers’ problems?
- Pure Scrum ignores product discovery in particular and product management in general. If you think of the Double Diamond, to use a popular picture, Scrum is focused on the right side; see above.
- Pure Scrum is designed around one team focused on supporting one product or service.
- Pure Scrum does not address portfolio management. It is not designed to align and manage multiple product initiatives or projects to achieve strategic business objectives.
- Pure Scrum is based on far-reaching team autonomy: The Product Owner decides what to build, the Developers decide how to build it, and the Scrum team self-manages.
... At its core, pure Scrum is less a project management framework and more a reflection of an organization’s fundamental approach to creating value. It requires a profound shift from seeing work as a series of prescribed steps to viewing it as a continuous journey of discovery and adaptation. 


The Rise of Agentic AI: How Hyper-Automation is Reshaping Cybersecurity and the Workforce

As AI advances, concerns about job displacement grow louder. For years, organizations have reassured employees that AI will “enhance, not replace” human roles. Smith offered a more nuanced perspective: “AI will replace tasks, not people—at least in the near term. Human oversight remains critical because we still don’t fully understand AI behavior.” In cybersecurity, AI acts as a force multiplier, streamlining tedious tasks like data analysis and incident documentation while enabling humans to focus on strategic decisions. This collaboration allows professionals to do more with less, amplifying productivity without eliminating the need for human expertise. However, Smith acknowledged long-term challenges. ... The rise of agentic AI marks a transformative moment for cybersecurity and the workforce. As organizations move beyond static workflows and embrace dynamic, autonomous systems, they gain the ability to respond to threats faster and more efficiently than ever before. However, this evolution demands a strategic approach—one that balances automation with human oversight, strengthens defenses against AI-driven attacks, and prepares for the societal shifts AI will bring.


If ChatGPT produces AI-generated code for your app, who does it really belong to?

From a contractual point of view, Santalesa contends that most companies producing AI-generated code will, "as with all of their other IP, deem their provided materials -- including AI-generated code -- as their property." OpenAI (the company behind ChatGPT) does not claim ownership of generated content. According to their terms of service, "OpenAI hereby assigns to you all its right, title, and interest in and to Output." Clearly, though, if you're creating an application that uses code written by an AI, you'll need to carefully investigate who owns (or claims to own) what. For a view of code ownership outside the US, ZDNET turned to Robert Piasentin, a Vancouver-based partner in the Technology Group at McMillan LLP, a Canadian business law firm. He says that ownership, as it pertains to AI-generated works, is still an "unsettled area of the law." ... Piasentin says there may already be some UK case law precedent, based not on AI but on video game litigation. A case before the High Court (roughly analogous to the US Supreme Court) determined that images produced in a video game were the property of the game developer, not the player -- even though the player manipulated the game to produce a unique arrangement of game assets on the screen.


Supply Chain Risk Mitigation Must Be a Priority in 2025

Implementing impactful supply chain protections is far easier said than accomplished, due to the complexity, scale, and integration of modern supply chain ecosystems. While there isn't a silver bullet for eradicating threats entirely, prioritizing a targeted focus on effective supply chain risk management principles in 2025 is a critical place to start. It will require an optimal balance of rigorous supplier validation, purposeful data exposure, and meticulous preparation. ... As supply chain attacks accelerate, organizations must operate under the assumption that a breach isn't just possible — it's probable. An "assumption of breach" mindset shift will help drive more meticulous approaches to preparation via comprehensive supply chain incident response and risk mitigation. Preparation measures should begin with developing and regularly updating agile incident response processes that specifically cater to third-party and supply chain risks. For effectiveness, these processes will need to be well-documented and frequently practiced through realistic simulations and tabletop exercises. Such drills help identify potential gaps in the response strategy and ensure that all team members understand their roles and responsibilities during a crisis.


The End of Bureaucracy — How Leadership Must Evolve in the Age of Artificial Intelligence

AI doesn't just optimize — it transforms. It flattens hierarchies, demands transparency and dismantles traditional power structures. For those managers who thrive on gatekeeping, AI represents a fundamental threat, eliminating barriers they've spent careers building. Consider this: AI thrives on efficiency, speed and clarity. Tasks that once consumed hours of human effort — like vetting vendor contracts or managing customer service inquiries — are now handled instantly by AI systems. Employees can experiment with bold ideas without wading through endless committee approvals. But the true power of AI lies in decentralizing decision-making. By analyzing vast datasets, AI equips frontline employees with actionable insights that previously required executive oversight. This creates organizations that are faster, more agile and less dependent on gatekeepers. ... In an AI-first world, hierarchies will begin to collapse as real-time data eliminates the need for multiple layers of oversight, enabling faster and more efficient decision-making. At the same time, workflows will be reimagined as leaders take on the critical task of redesigning processes to seamlessly integrate AI, ensuring organizations can adapt quickly and effectively.


GAO report says DHS, other agencies need to up their game in AI risk assessment

The GAO said it is “recommending that DHS act quickly to update its guidance and template for AI risk assessments to address the remaining gaps identified in this report.” DHS, in turn, it said, “agreed with our recommendation and stated it plans to provide agencies with additional guidance that addresses gaps in the report including identifying potential risks and evaluating the level of risk.” ... AI, he said, “is being pushed out to businesses and consumers by organizations that profit from doing so, and assessing and addressing the potential harm it may cause has until recently been an afterthought. We are now seeing more focus on these potential negative effects, but efforts to contain them, let alone prevent them, will always be far behind the steamroller of new innovations in the AI realm.” Thomas Randall, research lead at Info-Tech Research Group, said, “it is interesting that the DHS had no assessments that evaluated the level of risk for AI use and implementation, but had largely identified mitigation strategies. What this may mean is the DHS is taking a precautionary approach in the time it was given to complete this assessment.” Some risks, he said, “may be identified as significant enough to warrant mitigation regardless of precise quantification of that risk. 


How CI/CD Helps Minimize Technical Debt in Software Projects

One of the foundational principles of CI/CD is the enforcement of automated testing. Automated tests, such as unit tests, integration tests, and end-to-end tests, ensure that code changes do not break existing functionality. By integrating testing into the CI pipeline, developers are alerted to issues immediately after they commit code. ... CI/CD pipelines facilitate incremental and iterative development by encouraging small, frequent code commits. Large, monolithic changes often introduce complexity and technical debt because they are harder to test, debug, and review effectively. ... Technical debt often arises from manual processes that are error-prone and time-consuming. CI/CD eliminates many of these inefficiencies by automating repetitive tasks, such as building, testing, and deploying applications. Automation ensures that these steps are performed consistently and accurately, reducing the risk of human error. ... Code reviews are a critical component of maintaining high-quality software. CI/CD tools enhance the code review process by providing automated feedback on every commit. This feedback loop fosters a culture of accountability and continuous improvement among developers.
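The automated-testing principle described above can be sketched with a tiny example. The `parse_version` function and its tests below are hypothetical, invented purely to illustrate the pattern; in a real pipeline, a runner such as pytest would collect and execute tests like these on every commit, failing the build before a regression can merge.

```python
# Minimal sketch of the automated checks a CI pipeline runs on every
# commit. Function and tests are illustrative, not from any real project.

def parse_version(tag: str) -> tuple:
    """Parse a release tag like 'v1.2.3' into a comparable tuple."""
    return tuple(int(part) for part in tag.lstrip("v").split("."))

def test_parses_prefixed_tag():
    # Guards existing behavior: the 'v' prefix must be stripped.
    assert parse_version("v1.2.3") == (1, 2, 3)

def test_parses_bare_tag():
    # A small, focused test added alongside a small, focused commit.
    assert parse_version("2.0.1") == (2, 0, 1)
```

Because each commit is small and the tests run automatically, a failure points directly at the change that caused it, which is exactly how CI keeps debt from accumulating silently.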


Cost-conscious repatriation strategies

First, this is not a pushback on cloud technology as a concept; cloud works and has worked for the past 15 years. This repatriation trend highlights concerns about the unexpectedly high costs of cloud services, especially when enterprises feel they were promised lowered IT expenses during the earlier “cloud-only” revolutions. Leaders must adopt a more strategic perspective on their cloud architecture. It’s no longer just about lifting and shifting workloads into the cloud; it’s about effectively tailoring applications to leverage cloud-native capabilities—a lesson GEICO learned too late. A holistic approach to data management and technology strategies that aligns with an organization’s unique needs is the path to success and lower bills. Organizations are now exploring hybrid environments that blend public cloud capabilities with private infrastructure. A dual approach, which is nothing new, allows for greater data control, reduced storage and processing costs, and improved service reliability. Weekly noted that there are ways to manage capital expenditures in an operational expense model through on-premises solutions. On-prem systems tend to be more predictable and cost-effective over time.


Cyber Resilience: Adapting to Threats in the Cloud Era

Use cloud-native security solutions that offer automated threat detection, incident response, and monitoring. These technologies ought to be flexible enough to adjust to changes in the cloud environment and defend against new risks as they arise. ... Effective cyber resilience plans enable businesses to recover quickly from emergencies by reducing downtime and maintaining continuous service delivery. Businesses that put flexibility first can manage emergencies with few problems, which helps them keep the confidence and trust of their clients. Cyber resilience strongly emphasizes flexibility, enabling companies to address new risks in the ever-evolving digital environment. Businesses can lower financial losses and safeguard their reputation by concentrating on data protection and breach remediation. Finding and fixing common setup mistakes in cloud systems that could lead to security issues and data breaches requires using Cloud Security Posture Management (CSPM) tools. ... Because criminals frequently exploit these configuration errors to cause data breaches and security incidents, it is essential to identify them. Organizations can monitor their cloud environments and ensure that settings follow security best practices and regulations by using CSPM solutions. 
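A CSPM-style check can be sketched as a scan over declarative resource configurations for known misconfigurations. The resource schema below is invented for illustration; real CSPM tools query live cloud provider APIs and ship with large rule libraries.

```python
# Hedged sketch of a CSPM-style misconfiguration scan over a list of
# resource configs. The dict schema is hypothetical, for illustration.

def find_misconfigurations(resources: list[dict]) -> list[tuple[str, str]]:
    """Return (resource_name, issue) pairs for common config mistakes."""
    findings = []
    for res in resources:
        if res.get("public_access", False):
            findings.append((res["name"], "publicly accessible"))
        if not res.get("encryption_at_rest", True):
            findings.append((res["name"], "encryption at rest disabled"))
    return findings

# Example inventory with two deliberate mistakes.
buckets = [
    {"name": "logs-bucket", "public_access": True, "encryption_at_rest": True},
    {"name": "backups", "public_access": False, "encryption_at_rest": False},
]
```

Running such a scan continuously, rather than at audit time, is what turns posture management into a resilience practice: misconfigurations are caught as drift happens, not after a breach.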



Quote for the day:

"Listen with curiosity, speak with honesty act with integrity." -- Roy T Bennett