
Daily Tech Digest - July 13, 2024

Work in the Wake of AI: Adapting to Algorithmic Management and Generative Technologies

Current legal frameworks are struggling to keep pace with the issues arising from algorithmic management. Traditional employment laws, such as those concerning unfair dismissal, often do not extend protections to “workers” as a distinct category. Furthermore, discrimination laws require proof that the discriminatory behaviour was due to or related to the protected characteristic, which is difficult to ascertain and prove with algorithmic systems. To mitigate these issues, the researchers recommend a series of measures. These include ensuring algorithmic systems respect workers’ rights, granting workers the right to opt out of automated decisions such as job termination, banning excessive data monitoring and establishing the right to a human explanation for decisions made by algorithms. ... Despite the rapid deployment of GenAI and the introduction of policies around its use, concerns about misuse are still prevalent among nearly 40% of tech leaders. While recognising AI’s potential, 55% of tech leaders have yet to identify clear business applications for GenAI beyond personal productivity enhancements, and budget constraints remain a hurdle for some.


The rise of sustainable data centers: Innovations driving change

Data centers contribute significantly to global carbon emissions, making it essential to adopt measures that reduce their carbon footprint. Carbon usage effectiveness (CUE) is a metric used to assess a data center's carbon emissions relative to the energy consumed by its IT equipment. By minimizing CUE, data centers can significantly lower their environmental impact. ... Cooling is one of the largest energy expenses for data centers. Traditional air cooling systems are often inefficient, prompting the need for more advanced solutions. Free cooling, which leverages outside air, is a cost-effective method for data centers in cooler climates. Liquid cooling, on the other hand, uses water or other coolants to transfer heat away from servers more efficiently than air. ... Building and retrofitting data centers sustainably involves adhering to green building certifications like Leadership in Energy and Environmental Design (LEED) and Building Research Establishment Environmental Assessment Method (BREEAM). These certifications ensure that buildings meet high environmental performance standards.
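
To make the metric concrete, here is a minimal sketch of the CUE arithmetic, assuming The Green Grid's definition (total data-center CO2 emissions divided by IT equipment energy); all figures are hypothetical:

```python
def carbon_usage_effectiveness(total_co2_kg: float, it_energy_kwh: float) -> float:
    """CUE = total CO2 emitted by the data center (kg) / IT equipment energy (kWh)."""
    return total_co2_kg / it_energy_kwh

# Hypothetical year of operation: 1,200 tonnes of CO2 against 3 GWh of IT load.
cue = carbon_usage_effectiveness(1_200_000, 3_000_000)
print(f"CUE: {cue:.2f} kgCO2/kWh")  # 0.40; lower is better, and 0 is the ideal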


How AIOps Is Poised To Reshape IT Operations

A meaningfully different, as yet underutilized, high-value data set can be derived from the rich, complex interactions of information sources and users on the network, promising to triangulate and correlate with the other data sets available, elevating their combined value to the use case at hand. The challenge in leveraging this source is that the raw traffic data is impossibly massive and too complex for direct ingestion. Further, even compressed into metadata, without transformation it becomes a disparate stream of rigid, high-cardinality data sets due to its inherent diversity and complexity. A new breed of AIOps solutions is poised to overcome this data deficiency and transform this still-raw data stream into refined collections of organized data streams that are augmented and edited through intelligent feature extraction. These solutions use an adaptive AI model and a multi-step transformation sequence to work as an active member of a larger AIOps ecosystem, harmonizing data feeds with the workflows running on the target platform to make the data more relevant and less noisy.
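
To make the idea of intelligent feature extraction concrete, here is a small hypothetical sketch, not any vendor's pipeline: per-flow traffic metadata with unbounded cardinality is collapsed into fixed per-window aggregates a model can consume.

```python
from collections import defaultdict

# Hypothetical per-flow metadata records, already distilled from raw traffic.
flows = [
    {"window": "10:00", "src": "10.0.0.5", "dst_port": 443, "bytes": 52_000},
    {"window": "10:00", "src": "10.0.0.9", "dst_port": 443, "bytes": 7_500},
    {"window": "10:00", "src": "10.0.0.5", "dst_port": 22,  "bytes": 1_200},
]

# Feature extraction: collapse an unbounded stream of high-cardinality records
# into a fixed set of per-window aggregates that downstream models can consume.
windows = defaultdict(lambda: {"flows": 0, "bytes": 0, "srcs": set(), "ports": set()})
for f in flows:
    w = windows[f["window"]]
    w["flows"] += 1
    w["bytes"] += f["bytes"]
    w["srcs"].add(f["src"])
    w["ports"].add(f["dst_port"])

for window, w in windows.items():
    print(window, w["flows"], w["bytes"], len(w["srcs"]), len(w["ports"]))
    # 10:00 3 60700 2 2
```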


Addressing Financial Organizations’ Digital Demands While Avoiding Cyberthreats

The financial industry faces a difficult balancing act, with multiple conflicting priorities at the forefront. Organizations must continually strengthen security around their evolving solutions to keep up in an increasingly competitive and fast-moving landscape. But while strong security is a requirement, it cannot impair usability for customers or employees in an industry where accessibility, agility and the overall user experience are key differentiators. One of the best options for balancing these priorities is the utilization of secure access service edge (SASE) solutions. This model integrates several different security features such as secure web gateway (SWG), zero-trust network access (ZTNA), next-generation firewall (NGFW), cloud access security broker (CASB), data loss prevention (DLP) and network management functions, such as SD-WAN, into a single offering delivered via the cloud. Cloud-based delivery enables financial organizations to easily roll out SASE services and consistent policies to their entire network infrastructure, including thousands of remote workers scattered across various locations, or multiple branch offices, to protect private data and users, as well as deployed IoT devices.


Three Signs You Might Need a Data Fabric

One of the most significant challenges organizations face is data silos and fragmentation. As businesses grow and adopt new technologies, they often accumulate disparate data sources across different departments and platforms. These silos make it tougher to have a holistic view of your organization's data, resulting in inefficiencies and missed opportunities. ... You understand that real-time analytics is crucial to your organization’s success. You need to respond quickly to changing market conditions, customer behavior, and operational events. Traditional data integration methods, which often rely on batch processing, can be too slow to meet these demands. You need real-time analytics to:
- Manage the customer experience. If enhancing a customer’s experience through personalized and timely interactions is a priority, real-time analytics is essential.
- Operate efficiently. Real-time monitoring and analytics can help optimize operations, reduce downtime, and improve overall efficiency.
- Handle competitive pressure. Staying ahead of competitors requires quick adaptation to market trends and consumer demands, which is facilitated by real-time insights.


The Tension Between The CDO & The CISO: The Balancing Act Of Data Exploitation Versus Protection

While data delivers a significant competitive advantage to companies when used appropriately, without the right data security measures in place it can be misused. This not only erodes customers’ trust but also puts the company at risk of having to pay penalties and fines for non-compliance with data security regulations. As data teams aim to extract and exploit data for the benefit of the organisation, it is important to note that not all data is equal. As such, a risk-based approach must be in place to limit access to sensitive data across the organisation. In doing this, the IT system will have access to the full spectrum of data to join and process the information, run it through models and identify patterns, but employees rarely need access to all this detail. ... To overcome the conflict of data exploitation versus security and deliver a customer experience that meets customer expectations, data teams and security teams need to work together to achieve a common purpose and align on culture. To achieve this, each team needs to listen to and understand the other's needs and then identify solutions that help make the other team successful.


Content Warfare: Combating Generative AI Influence Operations

Moderating such enormous amounts of content by human beings is impossible. That is why tech companies now employ artificial intelligence (AI) to moderate content. However, AI content moderation is not perfect, so tech companies add a layer of human moderation for quality checks to the AI content moderation processes. These human moderators, contracted by tech companies, review user-generated content after it is published on a website or social media platform to ensure it complies with the “community guidelines” of the platform. However, generative AI has forced companies to change their approach to content moderation. ... Countering such content warfare requires collaboration across generative AI companies, social media platforms, academia, trust and safety vendors, and governments. AI developers should build models with detectable and fact-sensitive outputs. Academics should research the mechanisms of foreign and domestic influence operations emanating from the use of generative AI. Governments should impose restrictions on data collection for generative AI, impose controls on AI hardware, and provide whistleblower protection to staff working in the generative AI companies. 


OpenAI reportedly nears breakthrough with “reasoning” AI, reveals progress framework

OpenAI isn't alone in attempting to quantify levels of AI capabilities. As Bloomberg notes, OpenAI's system feels similar to levels of autonomous driving mapped out by automakers. And in November 2023, researchers at Google DeepMind proposed their own five-level framework for assessing AI advancement, showing that other AI labs have also been trying to figure out how to rank things that don't yet exist. OpenAI's classification system also somewhat resembles Anthropic's "AI Safety Levels" (ASLs) first published by the maker of the Claude AI assistant in September 2023. Both systems aim to categorize AI capabilities, though they focus on different aspects. Anthropic's ASLs are more explicitly focused on safety and catastrophic risks (such as ASL-2, which refers to "systems that show early signs of dangerous capabilities"), while OpenAI's levels track general capabilities. However, any AI classification system raises questions about whether it's possible to meaningfully quantify AI progress and what constitutes an advancement. The tech industry so far has a history of overpromising AI capabilities, and linear progression models like OpenAI's potentially risk fueling unrealistic expectations.


White House Calls for Defending Critical Infrastructure

The memo encourages federal agencies "to consult with regulated entities to establish baseline cybersecurity requirements that can be applied across critical infrastructures" while maintaining agility and adaptability to mature with the evolving cyberthreat landscape. ONCD and OMB also urged agencies and federal departments to study open-source software initiatives and the benefits that can be gained by establishing a governance function for open-source projects modeled after the private sector. Budget submissions should identify existing departments and roles designed to investigate, disrupt and dismantle cybercrimes, according to the memo, including interagency task forces focused on combating ransomware infrastructure and the abuse of virtual currency. Meanwhile, the administration is continuing its push for agencies to only use software provided by developers who can attest their compliance with minimum secure software development practices. The national cyber strategy - as well as the joint memo - directs agencies to "utilize grant, loan and other federal government funding mechanisms to ensure minimum security and resilience requirements" are incorporated into critical infrastructure projects.


Unifying Analytics in an Era of Advanced Tech and Fragmented Data Estates

“Data analytics has a last-mile problem,” according to Alex Gnibus, technical product marketing manager, architecture at Alteryx. “In shipping and transportation, you often think of the last-mile problem as that final stage of getting the passenger or the delivery to its final destination. And it’s often the most expensive and time-consuming part.” For data, there is a similar problem; when putting together a data stack, enabling the business at large to derive value from the data is a key goal—and challenge—of a modern enterprise. Achieving business value from data is the last mile, which is made difficult by complex, numerous technologies that are inaccessible to the final business user. Gnibus explained that Alteryx solves this problem by acting as the “truck” that delivers tangible business value from proprietary data, offering data discovery, use case identification, preparation and analysis, insight-sharing, and AI-powered capabilities. Acting as the easy-to-use interface for a business’ data infrastructure, Alteryx is the AI platform for large-scale enterprise analytics that offers no-code, drag-and-drop functionality that works with your unique data framework configuration as it evolves.



Quote for the day:

“Success is most often achieved by those who don't know that failure is inevitable.” -- Coco Chanel

Daily Tech Digest - May 29, 2024

Algorithmic Thinking for Data Scientists

While data scientists with computer science degrees will be familiar with the core concepts of algorithmic thinking, many increasingly enter the field with other backgrounds, ranging from the natural and social sciences to the arts; this trend is likely to accelerate in the coming years as a result of advances in generative AI and the growing prevalence of data science in school and university curriculums. ... One topic that deserves special attention in the context of algorithmic problem solving is that of complexity. When comparing two different algorithms, it is useful to consider the time and space complexity of each algorithm, i.e., how the time and space taken by each algorithm scales relative to the problem size (or data size). ... Some algorithms may manifest additive or multiplicative combinations of the above complexity levels. E.g., a for loop followed by a binary search entails an additive combination of linear and logarithmic complexities, attributable to sequential execution of the loop and the search routine, respectively.
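
As a minimal sketch of that additive example (hypothetical data; Python used for illustration), a linear scan followed by a binary search costs O(n) + O(log n), which reduces to O(n) because the linear term dominates:

```python
import bisect

def max_then_lookup(sorted_data: list[int], target: int) -> tuple[int, bool]:
    # O(n): linear scan over the whole list.
    maximum = sorted_data[0]
    for x in sorted_data:
        if x > maximum:
            maximum = x

    # O(log n): binary search on the already-sorted list.
    i = bisect.bisect_left(sorted_data, target)
    found = i < len(sorted_data) and sorted_data[i] == target

    # Total: O(n) + O(log n) = O(n), since the linear term dominates.
    return maximum, found

print(max_then_lookup([2, 3, 5, 8, 13, 21], 8))  # (21, True)
```

Big-O analysis keeps only the dominant term, which is exactly the comparison the excerpt describes.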


Job seekers and hiring managers depend on AI — at what cost to truth and fairness?

The darker side to using AI in hiring is that it can bypass potential candidates based on predetermined criteria that don’t necessarily take all of a candidate’s skills into account. And for job seekers, the technology can generate great-looking resumes, but often they’re not completely truthful when it comes to skill sets. ... “AI can sound too generic at times, so this is where putting your eyes on it is helpful,” Toothacre said. She is also concerned about the use of AI to complete assessments. “Skills-based assessments are in place to ensure you are qualified and check your knowledge. Using AI to help you pass those assessments is lying about your experience and highly unethical.” There’s plenty of evidence that genAI can improve resume quality, increase visibility in online job searches, and provide personalized feedback on cover letters and resumes. However, concerns about overreliance on AI tools, lack of human touch in resumes, and the risk of losing individuality and authenticity in applications are universal issues that candidates need to be mindful of regardless of their geographical location, according to Helios’ Hammell.


Comparing smart contracts across different blockchains from Ethereum to Solana

Polkadot is designed to enable interoperability among various blockchains through its unique architecture. The network’s core comprises the relay chain and parachains, each playing a distinct role in maintaining the system’s functionality and scalability. ... Developing smart contracts on Cardano requires familiarity with Haskell for Plutus and an understanding of Marlowe for financial contracts. Educational resources like the IOG Academy provide learning paths for developers and financial professionals. Tools like the Marlowe Playground and the Plutus development environment aid in simulating and testing contracts before deployment, ensuring they function as intended. ... Solana’s smart contracts are stateless, meaning the contract logic is separated from the state, which is stored in external accounts. This separation enhances security and scalability by isolating the contract code from the data it interacts with. Solana’s account model allows for program reusability, enabling developers to create new tokens or applications by interacting with existing programs, reducing the need to redeploy smart contracts, and lowering costs.
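
Solana programs are typically written in Rust; purely to illustrate the stateless pattern described above, here is a hypothetical Python sketch in which the "program" owns no data and every piece of state arrives through external accounts:

```python
from dataclasses import dataclass

@dataclass
class Account:
    """Stand-in for an external account that stores all state."""
    owner: str
    balance: int

def transfer(source: Account, dest: Account, amount: int) -> None:
    # The 'program' is pure logic: every piece of state it touches
    # arrives as an argument and lives outside the program itself.
    if source.balance < amount:
        raise ValueError("insufficient funds")
    source.balance -= amount
    dest.balance += amount

a, b = Account("alice", 100), Account("bob", 0)
transfer(a, b, 40)
print(a.balance, b.balance)  # 60 40
```

Because the logic holds no state of its own, the same deployed program can serve many different accounts, which is the reusability and cost advantage the excerpt points to.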


3 things CIOs can do to make gen AI synch with sustainability

“If you’re only buying inference services, ask them how they can account for all the upstream impact,” says Tate Cantrell, CTO of Verne, a UK-headquartered company that provides data center solutions for enterprises and hyperscalers. “Inference output takes a split second. But the only reason those weights inside that neural network are the way they are is because of massive amounts of training — potentially one or two months of training at something like 100 to 400 megawatts — to get that infrastructure the way it is. So how much of that should you be charged for?” Cantrell urges CIOs to ask providers about their own reporting. “Are they doing open reporting about the full upstream impact that their services have from a sustainability perspective? How long is the training process, how long is it valid for, and how many customers did that weight impact?” According to Sundberg, an ideal solution would be to have the AI model tell you about its carbon footprint. “You should be able to ask Copilot or ChatGPT what the carbon footprint of your last query is,” he says. 
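
To see the shape of Cantrell's question, here is a minimal back-of-the-envelope sketch; every number in it is an assumption drawn from the ranges quoted above, not a measured figure.

```python
# Hypothetical assumptions, loosely following the ranges quoted above.
training_power_mw = 250        # midpoint of the 100-400 MW range
training_days = 45             # midpoint of "one or two months"
model_lifetime_queries = 1e11  # assumed total queries served before retraining

training_energy_mwh = training_power_mw * training_days * 24
per_query_training_mwh = training_energy_mwh / model_lifetime_queries

print(f"Training energy: {training_energy_mwh:,.0f} MWh")        # 270,000 MWh
print(f"Amortized per query: {per_query_training_mwh * 1e6:.3f} Wh")  # 2.700 Wh
```

The point of the exercise is not the specific answer but that the amortized share depends entirely on assumptions, such as model lifetime, that only the provider can report.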


EU’s ChatGPT taskforce offers first look at detangling the AI chatbot’s privacy compliance

The taskforce’s report discusses this knotty lawfulness issue, pointing out ChatGPT needs a valid legal basis for all stages of personal data processing — including collection of training data; pre-processing of the data (such as filtering); training itself; prompts and ChatGPT outputs; and any training on ChatGPT prompts. The first three of the listed stages carry what the taskforce couches as “peculiar risks” for people’s fundamental rights — with the report highlighting how the scale and automation of web scraping can lead to large volumes of personal data being ingested, covering many aspects of people’s lives. It also notes scraped data may include the most sensitive types of personal data (which the GDPR refers to as “special category data”), such as health info, sexuality, political views, etc., which requires an even higher legal bar for processing than general personal data. On special category data, the taskforce also asserts that just because it’s public does not mean it can be considered to have been made “manifestly” public — which would trigger an exemption from the GDPR requirement for explicit consent to process this type of data.


Avoiding the cybersecurity blame game

Genuine negligence or deliberate actions should be handled appropriately, but apportioning blame and meting out punishment must be the final step in an objective, reasonable investigation. It should certainly not be the default reaction. So far, so reasonable, yes? But things are a little more complicated than this. It’s all very well saying, “don’t blame the individual, blame the company”. Effectively, no “company” does anything; only people do. The controls, processes and procedures that let you down were created by people – just different people. If we blame the designers of controls, processes and procedures… well, we are just shifting blame, which is still counterproductive. ... Managers should use the additional resources to figure out how to genuinely change the work environment in which employees operate and make it easier for them to do their job in a secure practical manner. Managers should implement a circular, collaborative approach to creating a frictionless, safer environment, working positively and without blame.


The decline of the user interface

The Ok and Cancel buttons played important roles. A user might go to a Settings dialog, change a bunch of settings, and then click Ok, knowing that their changes would be applied. But often, they would make some changes and then think “You know, nope, I just want things back like they were.” They’d hit the Cancel button, and everything would reset to where they started. Disaster averted. Sadly, this very clear and easy way of doing things somehow got lost in the transition to the web. On the web, you will often see Settings pages without Ok and Cancel buttons. Instead, you’re expected to click an X in the upper right to make the dialog close, accepting any changes that you’ve made. ... In the newer versions of Windows, I spend a dismayingly large amount of time trying to get the mouse to the right spot in the corner or edge of an application so that I can size it. If I want to move a window, it is all too frequently difficult to find a location at the top of the application to click on that will result in the window being relocated. Applications used to have a very clear title bar that was easy to see and click on.
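
A minimal, framework-agnostic sketch (hypothetical, not any real toolkit's API) of the snapshot-and-restore contract behind Ok and Cancel:

```python
import copy

class SettingsDialog:
    def __init__(self, settings: dict):
        self.settings = settings
        self._snapshot = copy.deepcopy(settings)  # taken when the dialog opens

    def ok(self) -> None:
        # Commit: the edits simply stand; refresh the snapshot.
        self._snapshot = copy.deepcopy(self.settings)

    def cancel(self) -> None:
        # Revert: everything goes back to how it was. Disaster averted.
        self.settings.clear()
        self.settings.update(copy.deepcopy(self._snapshot))

prefs = {"theme": "light", "font_size": 12}
dialog = SettingsDialog(prefs)
prefs["theme"] = "dark"   # user fiddles with settings
dialog.cancel()
print(prefs)              # {'theme': 'light', 'font_size': 12}
```

The close-box-applies-everything pattern the article laments is what you get when the snapshot, and therefore the cancel path, is dropped.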


Lawmakers paint grim picture of US data privacy in defending APRA

At the center of the debate is the American Privacy Rights Act (APRA), the push for a federal data privacy law that would either simplify a patchwork of individual state laws – or run roughshod over existing privacy legislation, depending on which state is offering an opinion. While harmonizing divergent laws seems wise as a general measure, states like California, where data privacy laws are already much stricter than in most places, worry about its preemptive clauses weakening their hard-fought privacy protections. Rodgers says APRA is “an opportunity for a reset, one that can help return us to the American Dream our Founders envisioned. It gives people the right to control their personal information online, something the American people overwhelmingly want,” she says. “They’re tired of having their personal information abused for profit.” From loose permissions on sharing location data to exposed search histories, there are far too many holes in Americans’ digital privacy for Rodgers’ liking. Pointing to the especially sensitive matter of children’s data, she says that “as our kids scroll, companies collect nearly every data point imaginable to build profiles on them and keep them addicted. ...”


Picking an iPaaS in the Age of Application Overload

Companies face issues using proprietary integration solutions, as they end up with black-box solutions with limited flexibility. For example, the inability to natively embed outdated technology into modern stacks, such as cloud native supply chains with CI/CD pipelines, can slow down innovation and complicate the overall software delivery process. Companies should favor iPaaS technologies grounded in open source and open standards. Can you deploy it to your container orchestration cluster? Can you plug it into your existing GitOps procedures? Such solutions not only ensure better integration into proven QA-tested procedures but also offer greater freedom to migrate, adapt and debug as needs evolve. ... As organizations scale, so too must their integration solutions. Companies should avoid iPaaS solutions offering only superficial “cloud-washed” capabilities. They should prioritize cloud native solutions designed from the ground up for the cloud, and that leverage container orchestration tools like Kubernetes and Docker Swarm, which are essential for ensuring scalability and resilience.
Shifting left is a cultural and practice shift, but it also includes technical changes to how a shared testing environment is set up. ... The approach scales effectively across engineering teams, as each team or developer can work independently on their respective services or features, thereby reducing dependencies. While this is great advice, it can feel hard to implement in the current development environment: If the process of releasing code to a shared testing cluster takes too much time, it doesn’t seem feasible to test small incremental changes. ... The difference between finding bugs as a user and finding them as a developer is massive: When an operations or site reliability engineer (SRE) finds a problem, they need to find the engineer who released the code, describe the problem they’re seeing, and present some steps to replicate the issue. If, instead, the original developer finds the problem, they can cut out all those steps by looking at the output, finding the cause, and starting on a fix. This proactive approach to quality reduces the number of bugs that need to be filed and addressed later in the development cycle.



Quote for the day:

"The best and most beautiful things in the world cannot be seen or even touched- they must be felt with the heart." -- Helen Keller

Daily Tech Digest - May 09, 2024

Red Hat delivers accessible, open source Generative AI innovation with Red Hat Enterprise Linux AI

RHEL AI builds on this open approach to AI innovation, incorporating an enterprise-ready version of the InstructLab project and the Granite language and code models along with the world’s leading enterprise Linux platform to simplify deployment across a hybrid infrastructure environment. This creates a foundation model platform for bringing open source-licensed GenAI models into the enterprise. RHEL AI includes:
- Open source-licensed Granite language and code models that are supported and indemnified by Red Hat.
- A supported, lifecycled distribution of InstructLab that provides a scalable, cost-effective solution for enhancing LLM capabilities and making knowledge and skills contributions accessible to a much wider range of users.
- Optimised bootable model runtime instances with Granite models and InstructLab tooling packages as bootable RHEL images via RHEL image mode, including optimised PyTorch runtime libraries and accelerators for AMD Instinct™ MI300X, Intel and NVIDIA GPUs, and NeMo frameworks.


Regulators are coming for IoT device security

Up to now, the IoT industry has relied mainly on security by obscurity and the results have been predictable: one embarrassing compromise after another. IoT devices find themselves recruited into botnets, connected locks get trivially unlocked, and cars can get remotely shut down while barreling down the highway at 70mph. Even Apple, which may have the most sophisticated hardware security team on the planet, has faced some truly terrible security vulnerabilities. Regulators have taken note, and they are taking action. In September 2022, NIST fired a warning shot by publishing a technical report that surveyed the state of IoT security and made a series of recommendations. This was followed by a voluntary regulatory scheme—the Cyber Trust Mark, published by the FCC in the US—as well as a draft of the European Union’s upcoming Cyber Resilience Act (CRA). Set to begin rolling out in 2025, the CRA will create new cybersecurity requirements for selling a device in the single market. Standards bodies have not stayed idle. The Connectivity Standards Alliance published the IoT Device Security Specification in March of this year, after more than a year of work by its Product Security Working Group.


Australia revolutionises data management challenges

In Australia, the importance of data literacy is growing rapidly: it is now more essential than ever to be able to comprehend and effectively communicate data as valuable information. Highlighting the importance of data literacy across government agencies is key to unlocking the true power of data. Understanding which data to use for problem-solving, employing critical thinking to comprehend and tackle data strengths and limitations, strategically utilising data to shape policies and implement effective programmes, regulations, and services, and leveraging data to craft a captivating narrative are all essential components of this process. Nevertheless, the ongoing challenge lies in ensuring that employees have the ability to interpret and utilise data effectively. Individuals who are inexperienced with data may find it challenging to work with data, comprehend intricate datasets, analyse patterns, and extract valuable insights. Organisations are placing a strong emphasis on data literacy initiatives, aiming to turn individuals with limited data knowledge into experts in the field.


Navigating Architectural Change: Overcoming Drift and Erosion in Software Systems

Architectural drift involves the introduction of design decisions that were not part of the original architectural plan, yet these decisions do not necessarily contravene the foundational architecture. In contrast, architectural erosion occurs when new design considerations are introduced that directly conflict with or undermine the system's intended architecture, effectively violating its guiding principles. ... In software engineering terms, a system may start with a clean architecture but, due to architectural drift, evolve into a complex tangle of multiple architectural paradigms, inconsistent coding practices, redundant components, and dependencies. On the other hand, architectural erosion could be likened to making alterations or additions that compromise the structural integrity of the house. For instance, deciding to remove or alter key structural elements, such as knocking down a load-bearing wall to create an open-plan layout without proper support, or adding an extra floor without considering the load-bearing capacity of the original walls.


Strong CIO-CISO relations fuel success at Ally

We identify the value we are creating and capturing before we kick off a technology project, and it’s a joint conversation with the business. I don’t think it’s just the business responsibility to say my customer acquisition is going to go up, or my revenue is going to go up by X. There is a technology component to it, which is extremely critical, especially as a full-scale digital-only organization. What does it take for you to build the capability? How long will it take? How much does it cost and what does it cost to run it? ... Building a strong leadership team is critical. Empowering them is even more critical. When people talk about empowerment, they think it means I leave my leaders alone and they go do whatever they want. It’s actually the opposite. We have sensitive and conflict-filled conversations, and the intent of that is to make each other better. If I don’t understand how my leaders are executing, I won’t be able to connect the dots. It is not questioning what they’re doing; it’s asking questions for my learning so I can connect and share learnings from what other leaders are doing. That’s what leads us to preserving that culture.


To defend against disruption, build a thriving workforce

To build a thriving workplace, leaders must reimagine work, the workplace, and the worker. That means shifting away from viewing employees as cogs who hit their deliverables then turn back into real human beings after the day is done. Employees are now more like elite artists or athletes who are inspired to produce at the highest levels but need adequate time to recharge and recover. The outcome is exceptional; the path to getting there is unique. ... Thriving is more than being happy at work or the opposite of being burned out. Rather, one of the cornerstones of thriving is the idea of positive functioning: a holistic way of being, in which people find a purposeful equilibrium between their physical, mental, social, and spiritual health. Thriving is a state that applies across talent categories, from educators and healthcare specialists to data engineers and retail associates. ... In this workplace, people at every level are capable of being potential thought leaders who have influence through the right training, support, and guidance. They don’t have to be just “doers” who simply implement what others tell them to. 


Tips for Controlling the Costs of Security Tools

The total amount that a business spends on security tools can vary widely depending on factors like which types of tools it deploys, the number of users or systems the tools support and the pricing plans of tool vendors. But on the whole, it’s fair to say that tool expenditures are a significant component of most business budgets. Moody’s found, for example, that companies devote about 8% of their total budget to security. That figure includes personnel costs as well as tool costs, but it provides a sense of just how high security spending tends to be relative to overall business expenses. These costs are likely only to grow. IDC believes that total security budgets will increase by more than a third over the next few years, due in part to rising tool costs. This means that finding ways to rein in spending on security tools is important not just for reducing overall costs today, but also preventing cost overruns in the future. Of course, reducing spending can’t amount simply to abandoning critical tools or turning off important features.


UK Regulator Tells Platforms to 'Tame Toxic Algorithms'

The Office of Communications, better known as Ofcom, on Wednesday urged online intermediaries, which include end-to-end encrypted platforms such as WhatsApp, to "tame toxic algorithms." Ensuring recommender systems "do not operate to harm children" is a measure the regulator made in a draft proposal for regulations enacting the Online Safety Act, legislation the Conservative government approved in 2023 that is intended to limit children's exposure to damaging online content. The law empowers the regulator to order online intermediaries to identify and restrict pornographic or self-harm content. It also imposes criminal prosecution for those who send harmful or threatening communications. Instagram, YouTube, Google and Facebook are among the 100,000 web services that come under the scope of the regulation and are likely to be affected by the new requirements. "Any service which operates a recommender system and is at higher risk of harmful content should identify who their child users are and configure their algorithms to filter out the most harmful content from children's feeds and reduce the visibility of other harmful content," Ofcom said.


Businesses lack AI strategy despite employee interest — Microsoft survey

“While leaders agree using AI is a business imperative, and many say they won’t even hire someone without AI skills, they also believe that their companies lack a vision and plan to implement AI broadly; they’re stuck in AI inertia,” Colette Stallbaumer, general manager of Copilot and Cofounder of Work Lab at Microsoft, said in a pre-recorded briefing. “We’ve come to the hard part of any tech disruption, moving from experimentation to business transformation,” Stallbaumer said. While there’s clear interest in AI’s potential, many businesses are proceeding with caution with major deployments, say analysts. “Most organizations are interested in testing and deployment, but they are unsure where and how to get the most return,” said Carolina Milanesi, president and principal analyst at Creative Strategies. Security is among the biggest concerns, said Milanesi, “and until that is figured out, it is easier for organizations to shut access down.” As companies start to deploy AI, IT teams face significant demands, said Josh Bersin, founder and CEO of The Josh Bersin Company. 


Mayorkas, Easterly at RSAC Talk AI, Security, and Digital Defense

While acknowledging the increasingly ubiquitous use of AI in many services across the nation, Mayorkas commented on the advisory board’s conversation about leveraging that technology in cybersecurity. “It’s a very interesting discussion on what the definition of ‘safe’ is,” he said. “For example, most people now when they speak of the civil rights, civil liberties implications, categorize that under the responsible use of AI, but what we heard yesterday was an articulation of the fact that the civil liberties, civil rights implications of AI really are part and parcel of safety.” ... Technologies are shipped in ways that create risks and vulnerabilities, and they are configured and deployed in ways that are incredibly complex. “It’s eerily reminiscent of William Gibson's ‘Neuromancer,’” Krebs said. “When he talks about cyberspace, he said ‘the unthinkable complexity,’ and that’s what it's like right now to deploy and manage a large enterprise.” “We are just not sitting in place or standing in place because new technology is emerging on a regular basis,” he said.



Quote for the day:

"Successful people do what unsuccessful people are not willing to do. Don't wish it were easier; wish you were better." -- Jim Rohn

Daily Tech Digest - January 26, 2024

Why a Chief Cyber Resilience Officer is Essential in 2024

“We'll see the role popping up more and more as an operational outcome within security programs and more of a focus in business. In the wake of the pandemic and macroeconomic conditions and everything, what business leader isn’t thinking about business resilience? So, cyber resilience tucks nicely into that.” On the surface, the standalone CISO role isn’t much different because it serves as the linchpin for securing the enterprise. There are many different flavors of CISO, with some being business-focused, says Hopkins, whose teams take on more compliance tasks as opposed to more technical security operations. Other CISOs are more technical, meaning they’ll monitor threats in the environment and respond accordingly, while compliance is a separate function. However, the stark differences between the two roles lie in the mindset, approach, and target outcome for the scenario. The CCRO’s mindset is “it’s not a matter of if, but when.” So, the CCRO’s approach is to anticipate cyber incidents and make incident response preparations that will mitigate material damage to a business. They act as a lifeline. This approach is arguably the role’s most quintessential attribute. 


How To Sell Enterprise Architecture To The Business

The best way to win buy-in for your enterprise architecture (EA) practice is to know who your stakeholders are and which of them will be the most receptive to your ideas. EA has a broad scope that impacts your entire business strategy beyond just your application portfolio, so you need to adapt your presentations to your audience. Defining the specific parts of your EA practice that matter to each stakeholder will keep your discussion relevant and impactful. Put your processes in the context of the stakeholder's business area and show the immediate value you will create and the structure that you have in place to do so. You can even offer to help install EA processes into other teams' workflows to help improve synergy with their toolsets. Just ensure that you highlight the benefits for them. Explaining to your marketing team how you plan to optimize your organization's finance software is not going to engage them. However, showcasing the information you have on your content management systems and MQL trackers will catch their interest. Once a group of key stakeholders is on board with your EA practice, you will have a group of EA evangelists and a selection of case studies that you can use to win over more and more stakeholders.


Quantum Breakthrough: Unveiling the Mysteries of Electron Tunneling

Tunneling is a fundamental process in quantum mechanics, involving the ability of a wave packet to cross an energy barrier that would be impossible to overcome by classical means. At the atomic level, this tunneling phenomenon significantly influences molecular biology. It aids in speeding up enzyme reactions, causes spontaneous DNA mutations, and initiates the sequences of events that lead to the sense of smell. Photoelectron tunneling is a key process in light-induced chemical reactions, charge and energy transfer, and radiation emission. The size of optoelectronic chips and other devices is approaching the sub-nanometer atomic scale, where quantum tunneling effects between different channels are significantly enhanced. ... This work successfully reveals the critical role of neighboring atoms in electron tunneling in sub-nanometer complex systems. This discovery provides a new way to deeply understand the key role of the Coulomb effect under the potential barrier in electron tunneling dynamics and solid high-harmonic generation, and lays a solid research foundation for probing and controlling the tunneling dynamics of complex biomolecules.


UK Intelligence Fears AI Will Fuel Ransomware, Exacerbate Cybercrime

“AI will primarily offer threat actors capability uplift in social engineering,” the NCSC said. “Generative AI (GenAI) can already be used to enable convincing interaction with victims, including the creation of lure documents, without the translation, spelling and grammatical mistakes that often reveal phishing. This will highly likely increase over the next two years as models evolve and uptake increases.” The other worry deals with hackers using today’s AI models to quickly sift through the gigabytes or even terabytes of data they loot from a target. For a human it could take weeks to analyze the information, but an AI model could be programmed to quickly pluck out important details within minutes to help hackers launch new attacks or schemes against victims. ... Despite the potential risks, the NCSC's report did find one positive: “The impact of AI on the cyber threat will be offset by the use of AI to enhance cyber security resilience through detection and improved security by design.” So it’s possible the cybersecurity industry could develop AI smart enough to counter next-generation attacks. But time will tell. Meanwhile, other cybersecurity firms including Kaspersky say they've also spotted cybercriminals "exploring" using AI programs.


Machine learning for Java developers: Algorithms for machine learning

In supervised learning, a machine learning algorithm is trained to correctly respond to questions related to feature vectors. To train an algorithm, the machine is fed a set of feature vectors and an associated label. Labels are typically provided by a human annotator and represent the right answer to a given question. The learning algorithm analyzes feature vectors and their correct labels to find internal structures and relationships between them. Thus, the machine learns to correctly respond to queries. ... In unsupervised learning, the algorithm is programmed to predict answers without human labeling, or even questions. Rather than predetermine labels or what the results should be, unsupervised learning harnesses massive data sets and processing power to discover previously unknown correlations. In consumer product marketing, for instance, unsupervised learning could be used to identify hidden relationships or consumer grouping, eventually leading to new or improved marketing strategies. ... The challenge of machine learning is to define a target function that will work as accurately as possible for unknown, unseen data instances. 
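
A compact sketch of both paradigms on toy data (Python with scikit-learn for brevity, even though the article targets Java developers; the feature vectors and labels are invented):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Feature vectors: e.g., [square_meters, rooms] for houses (toy data).
X = np.array([[50, 2], [60, 2], [120, 4], [150, 5]])

# Supervised: a human annotator supplies labels (0 = cheap, 1 = expensive),
# and the algorithm learns to answer the labeled question for new vectors.
y = np.array([0, 0, 1, 1])
clf = LogisticRegression().fit(X, y)
print(clf.predict([[130, 4]]))  # -> [1]

# Unsupervised: no labels or questions; the algorithm discovers groupings.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)  # cluster assignments found without any labeling
```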


How to protect your data privacy: A digital media expert provides steps you can take and explains why you can’t go it alone

The dangers you face online take very different forms, and they require different kinds of responses. The kind of threat you hear about most in the news is the straightforwardly criminal sort of hackers and scammers. The perpetrators typically want to steal victims’ identities or money, or both. These attacks take advantage of varying legal and cultural norms around the world. Businesses and governments often offer to defend people from these kinds of threats, without mentioning that they can pose threats of their own. A second kind of threat comes from businesses that lurk in the cracks of the online economy. Lax protections allow them to scoop up vast quantities of data about people and sell it to abusive advertisers, police forces and others willing to pay. Private data brokers most people have never heard of gather data from apps, transactions and more, and they sell what they learn about you without needing your approval. A third kind of threat comes from established institutions themselves, such as the large tech companies and government agencies. These institutions promise a kind of safety if people trust them – protection from everyone but themselves, as they liberally collect your data.


Pwn2Own 2024: Tesla Hacks, Dozens of Zero-Days in Electrical Vehicles

"The attack surface of the car it's growing, and it's getting more and more interesting, because manufacturers are adding wireless connectivities, and applications that allow you to access the car remotely over the Internet," Feil says. Ken Tindell, chief technology officer of Canis Automotive Labs, seconds the point. "What is really interesting is how so much reuse of mainstream computing in cars brings along all the security problems of mainstream computing into cars." "Cars have had this two worlds thing for at least 20 years," he explains. First, "you've got mainstream computing (done not very well) in the infotainment system. We've had this in cars for a while, and it's been the source of a huge number of vulnerabilities — in Bluetooth, Wi-Fi, and so on. And then you've got the control electronics, and the two are very separate domains. Of course, you get problems when that infotainment then starts to touch the CAN bus that's talking to the brakes, headlights, and stuff like that." It's a conundrum that should be familiar to OT practitioners: managing IT equipment alongside safety-critical machinery, in such a way that the two can work together without spreading the former's nuisances to the latter. 


Does AI give InfiniBand a moment to shine? Or will Ethernet hold the line?

Ethernet’s strengths include its openness and its ability to do a more than decent job for most workloads, a factor appreciated by cloud providers and hyperscalers who either don't want to manage a dual-stack network or become dependent on the small pool of InfiniBand vendors. Nvidia's SpectrumX portfolio uses a combination of Nvidia's 51.2 Tb/s Spectrum-4 Ethernet switches and BlueField-3 SuperNICs to provide InfiniBand-like network performance, reliability, and latencies using 400 Gb/s RDMA over Converged Ethernet (RoCE). Broadcom has made similar claims across its Tomahawk and Jericho switch lines, which use either data processing units to manage congestion or handle it in the top-of-rack switch with the Jericho3-AI platform, announced last year. To Broadcom's point, hyperscalers and cloud providers such as AWS have done just that, Boujelbene said. The analyst noted that what Nvidia has done with SpectrumX is compress this work into a platform that makes it easier to achieve low-loss Ethernet. And while Microsoft has favored InfiniBand for its AI cloud infrastructure, AWS is taking advantage of improving congestion management techniques in its own Elastic Fabric Adapter 2 (EFA2) network.


The Evolution & Outlook of the Chief Information Security Officer

Beyond mere implementation, the CISO also carries the mantle of education, nurturing a cybersecurity-conscious environment by making every employee cognizant of potential cyber threats and effective preventive measures. As the digital landscape shifts beneath our feet, the roles and responsibilities of the CISO have significantly evolved, casting a larger shadow over the organization’s operations and extending far beyond the traditional confines of IT risk management. No longer confined to the realms of technology alone, the CISO has become an integral component of the broader business matrix. They stand at the intersection of business and technology, needing to balance the demands of both spheres in order to effectively steer the organization towards a secure digital future. ... The increasingly digitalized and interconnected world of today has thrust the role of the Chief Information Security Officer (CISO) into the limelight. Their duties have become crucial as organizations navigate a complex and ever-evolving cybersecurity landscape. Customer data protection, adherence to intricate regulations, and ensuring seamless business operations in the face of potential cyber threats are prime priorities that necessitate the presence of a CISO. 


To Address Security Data Challenges, Decouple Your Data

Why is this a good thing? It can ultimately help you gain a holistic perspective of all the security tools you have in your organization to ensure you’re leveraging the intrinsic value of each one. Most organizations have dozens of security tools, if not more, but most lack a solid understanding or mapping of what data should go into the SIEM solution, what should come out, and what data is used for security analytics, compliance, or reporting. As data becomes more complex, extracting value and aggregating insights become more difficult. When you decide to decouple the data from the SIEM system, you have an opportunity to evaluate your data. As you move towards an integrated data layer where disparate data is consolidated, you can clean, deduplicate, and enrich it. Then you have the chance to merge that data not only with other security data but with enterprise IT and business data, too. Decoupling the data into a layer where disparate data is woven together and normalized for multidomain data use cases allows your organization to easily take HR data, organizational data, and business logic and transform it all into ready-to-use business data where security is a use case. 
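
As a toy illustration of that decoupled layer (the field names and records below are invented), events from two tools are normalized, deduplicated, and then enriched with business context:

```python
# Hypothetical sketch of a decoupled data layer: security events from two
# tools are deduplicated, then enriched with HR/business context.
events = [
    {"user": "jdoe", "src_tool": "edr",  "action": "login_failed", "ts": "2023-12-01T10:00"},
    {"user": "jdoe", "src_tool": "siem", "action": "login_failed", "ts": "2023-12-01T10:00"},
    {"user": "asmith", "src_tool": "edr", "action": "usb_insert",  "ts": "2023-12-01T10:05"},
]
hr_directory = {"jdoe": "Finance", "asmith": "Engineering"}  # assumed business data

# Deduplicate on (user, action, ts), regardless of which tool reported it...
unique = {(e["user"], e["action"], e["ts"]): e for e in events}.values()

# ...then enrich with organizational context for multidomain use cases.
enriched = [{**e, "department": hr_directory.get(e["user"], "unknown")} for e in unique]
for e in enriched:
    print(e)
```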



Quote for the day:

“If my mind can conceive it, my heart can believe it, I know I can achieve it!” -- Jesse Jackson

Daily Tech Digest - December 23, 2023

How LLMs made their way into the modern data stack in 2023

Beyond helping teams generate insights and answers from their data through text inputs, LLMs are also handling traditionally manual data management and the data efforts crucial to building a robust AI product. In May, Intelligent Data Management Cloud (IDMC) provider Informatica debuted Claire GPT, a multi-LLM-based conversational AI tool that allows users to discover, interact with and manage their IDMC data assets with natural language inputs. It handles multiple jobs within the IDMC platform, including data discovery, data pipeline creation and editing, metadata exploration, data quality and relationships exploration, and data quality rule generation. Then, to help teams build AI offerings, California-based Refuel AI provides a purpose-built large language model that helps with data labeling and enrichment tasks. A paper published in October 2023 also shows that LLMs can do a good job at removing noise from datasets, which is also a crucial step in building robust AI. Other areas in data engineering where LLMs can come into play are data integration and orchestration. 


Corporate governance in 2023: a year in review

2023 has seen a continuing trend of more responsibilities for directors. Often, this responsibility comes from regulators; sometimes, it comes from investors or other stakeholders. One thing is certain, though: directors are rapidly losing any remaining wiggle room to be “rubber-stamp” individuals. Modern board roles carry serious accountability; many directors are starting to appreciate that and adhere to new standards. The trouble is that sometimes the new standards overstretch the director – so much so that we now have concerns about overboarding, exhaustion, and undue stress. How will that play out if the trend of more responsibility continues? ... The board dismissed the evidently popular CEO Sam Altman in a decision made behind closed doors with utmost secrecy. And as the world’s attention predictably turned their way, they could give no answers. Soon, Altman was rehired after around 70% of the company’s staff threatened to resign and join Microsoft (a significant OpenAI investor). The board subsequently agreed to undergo a major reshuffle for more accountability and transparent decision-making.


Quantum Computing’s Hard, Cold Reality Check

The problem isn’t just one of timescales. In May, Matthias Troyer, a technical fellow at Microsoft who leads the company’s quantum computing efforts, co-authored a paper in Communications of the ACM suggesting that the number of applications where quantum computers could provide a meaningful advantage was more limited than some might have you believe. “We found out over the last 10 years that many things that people have proposed don’t work,” he says. “And then we found some very simple reasons for that.” The main promise of quantum computing is the ability to solve problems far faster than classical computers, but exactly how much faster varies. There are two applications where quantum algorithms appear to provide an exponential speed up, says Troyer. One is factoring large numbers, which could make it possible to break the public key encryption the internet is built on. The other is simulating quantum systems, which could have applications in chemistry and materials science. Quantum algorithms have been proposed for a range of other problems including optimization, drug design, and fluid dynamics. 


Navigating the Data Landscape: The Crucial Role of Data Governance in Today’s Business Environment

Data quality management has become increasingly paramount as the volume of data rises exponentially day by day. Organizations can protect their data with policies and procedures, ensure that they follow all the rules and regulations, and hire folks who understand the data being collected and what it means to the company, but if that data isn’t high quality, the organization may get the short end of the stick. Maybe you’re three weeks late for a TikTok trend, or you miss out on a whole subset of customers because of a misstep in your collection methods; either way, the lost profit and the missed chance to build on that data point could prove pivotal. Ensuring that your organization has processes to monitor and improve your data quality on a continuous basis will save your organization time and money in the long run. Despite its importance, implementing effective data governance comes with challenges. Organizations often face resistance to change, cultural barriers, and the complexity of managing diverse data sources.


Choosing Between Message Queues and Event Streams

There are numerous distinctions between technologies that allow you to implement event streaming and those that you can use for message queueing. To highlight them, I will compare Apache Kafka and RabbitMQ. I’ve chosen Kafka and RabbitMQ specifically because they are popular, widely used solutions providing rich capabilities that have been extensively battle-tested in production environments. ... Message queueing and event streaming can both be used in scenarios requiring decoupled, asynchronous communication between different parts of a system. For instance, in microservices architectures, both can power low-latency messaging between various components. However, going beyond messaging, event streaming and message queueing have distinct strengths and are best suited to different use cases. ... Message queueing is a good choice for many messaging use cases. It’s also an appealing proposition if you’re early in your event-driven journey; that’s because message queueing technologies are generally easier to deploy and manage than event streaming solutions. 
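
As a hedged illustration of the two models, assuming local brokers on default ports and the kafka-python and pika client libraries (the topic and queue names are invented): Kafka appends events to a durable log that consumers can replay at their own pace, while RabbitMQ delivers each message to a worker once.

```python
# Assumes kafka-python and pika are installed and brokers run on default ports.
from kafka import KafkaProducer
import pika

# Event streaming (Kafka): append an event to a durable, replayable log.
producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("orders", b'{"order_id": 1, "status": "created"}')
producer.flush()  # consumers read at their own offsets and can re-read later

# Message queueing (RabbitMQ): hand a task to one worker; once acked, it's gone.
conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = conn.channel()
channel.queue_declare(queue="order_tasks", durable=True)
channel.basic_publish(exchange="", routing_key="order_tasks",
                      body=b'{"order_id": 1, "task": "send_email"}')
conn.close()
```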


5G and edge computing: What they are and why you should care

Instead of relying solely on large, high-powered cell towers (as 4G does), 5G will run off both those towers and a ton of small cell sites that can be clustered together. This is how 5G handles high population density. 5G is also supposed to be more energy efficient. As such, the communications component of IoT devices won't drain as much power, resulting in longer battery life for connected devices. There's also a ton of AI and machine learning in 5G implementations. 5G nodes and interface devices are deployed on the edge, away from central hubs. They utilize AI and machine learning to analyze communications performance, and use AI to bandwidth-shape communications, to wring as much performance out of the hardware as possible. You're familiar with the term "cloud computing." We've all used cloud services, services that run on a server someplace rather than on our desktop computers or mobile devices. The cloud, of course, isn't really a cloud. Amazon, Google, Facebook, Microsoft, and others operate massive data centers packed with thousands upon thousands of servers. Soft and fluffy, the cloud is not.


Stolen Booking.com Credentials Fuel Social Engineering Scams

Social engineering expert Sharon Conheady said this type of trickery remains extremely difficult to repel, because of the customer-first nature of hospitality. Many public-facing people in such organizations, such as receptionists, are "trained to help people - that's their job," and of course they're going to bend over backwards to try to meet apparent customers' demands, Conheady said in an interview at this month's Black Hat Europe conference in London. Help desks remain another frequent target. "I had a client lately who asked me to call the help desk and obtain BitLocker keys," she said, referring to a recent penetration test. "Every single one of the help desk agents gave us the BitLocker key." That prompted her to ask: Do these personnel even know what a BitLocker key is, and why they shouldn't share it? The client said they didn't know. While training people in customer-facing roles can help, Conheady said the only truly effective approach would be to put in place strong technical controls to outright prevent and block such attacks.


Significantly Improving Security Posture: A CMMI Case Study

“Phoenix Defense has led the way in adopting CMMI best practices for nearly two decades, and has now included the Security best practices,” says Kris Puthucode, Certified CMMI High Maturity Lead Appraiser at Software Quality Center LLC. “This adoption has yielded quantifiable benefits, enhancing security posture across Mission, Personnel, Physical, Process, and Cybersecurity domains. Additionally, incorporating Virtual work best practices has standardized virtual meetings and events, boosting efficiency.” Phoenix Defense has been a CMMI Performance Solutions Organization since 2005, first achieving Maturity Level 5 in 2020. ... Before adopting the CMMI Security and Managing Security Threats and Vulnerabilities Practice Areas in the model, Phoenix Defense had a closed network with no outward-facing applications and relied on a third-party vendor to monitor threats and spam. They did not fully or quantitatively track attacks against the networks or other data flows, and they required a more robust approach to properly ensure network security.


5 common data security pitfalls — and how to avoid them

While regulations like GDPR and SOX set standards for data security, they are merely starting points and should be considered table stakes for protecting data. Compliance should not be mistaken for complete data security, as robust security involves going beyond compliance checks. The fact is that many large data breaches have occurred in organizations that were fully compliant on paper. Moving beyond compliance requires actively identifying and mitigating risks rather than just ticking boxes during audits. ... Data is one of the most valuable assets for any organization. And yet, the question, “Who owns the data?” often leads to ambiguity within organizations. Clear delineation of data ownership and responsibility is crucial for effective data governance. Each team or employee must understand their role in protecting data to create a culture of security. ... Unpatched vulnerabilities are one of the easiest targets for cyber criminals. This means that organizations face significant risks when they can’t address public vulnerabilities quickly. Despite the availability of patches, many enterprises delay deployment for various reasons, which leaves sensitive data vulnerable.


Outmaneuvering AI: Cultivating Skills That Make Algorithms Scratch Their Head

Reasoning, the intellectual ninja of skills, is all about slicing through misinformation, assumptions, and biases to get to the heart of the matter. It’s not just drawing conclusions, but thinking about how we do that. This skill is the brain’s bouncer, keeping cognitive fallacies and hasty generalizations at bay. We humans, bless our hearts, are prone to jumping on the bandwagon or seeing patterns where there are none (like seeing a face on Mars or believing in hot streaks at Vegas). These mental shortcuts, or heuristics, can lead us astray, making reasoning not just useful but essential. AI is trained on our past reasoning reflected in old works. But it can’t reason on its own — at least not yet. Consider a business deciding whether to invest in a new technology. Without proper reasoning, they might follow the hype (everyone else is doing it!) or rely on gut feelings (it just feels right!). But with reasoning, they dissect the decision, weigh the evidence, consider alternatives, and make a choice that’s not just good on paper, but good in reality.



Quote for the day:

"Whether you think you can or you think you can’t, you’re right." -- Henry Ford

Daily Tech Digest - November 14, 2023

Balancing act: CISOs' knife-edge role in modern cybersecurity

Enhanced personal liability and duty of care are becoming increasingly unavoidable for many industries under the NIS2 (Network and Information Systems Directive) - a directive to set higher standards for cybersecurity across the European Union - and DORA (Digital Operational Resilience Act). This change is unnerving for CISOs as their role is now officially recognized by regulators, shareholders, and customers. In a recent global survey by Proofpoint, 62% of CISOs cited concerns about personal liability, demonstrating the increased pressures of the role. ... Cybercriminals are already experienced users of AI, with ransomware producers incorporating AI and machine learning techniques into their malware while using it to target specific victims and evade antivirus software detection. Such use of advanced technology is expected to continue as ransomware developers become more proficient in their tactics and multiply the challenges CISOs will face. While AI can automate threat detection and response, it requires an understanding of past threat activity.


Exploring the Role of Consensus Algorithms in Distributed System Design

Consensus, in the context of distributed systems, is the act of getting a group of nodes to agree on a single value or outcome, even if failures and network delays occur. This agreement is vital for the proper functioning of distributed systems, for it ensures that all nodes operate cohesively and consistently, even when they are geographically dispersed. ... At the heart of many consensus algorithms is the concept of leader election, which establishes a single node responsible for coordinating and making decisions on behalf of the group. In other words, this leader ensures that all nodes in the system agree on a common value or decision, promoting order and preventing conflicts in distributed environments. Fault tolerance is a critical aspect of consensus algorithms as well, as it allows systems to continue functioning even in the presence of node failures, network partitions, or other unforeseen issues. Consistency, reliability, and fault tolerance are among the primary guarantees offered.
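
To make the leader-election idea concrete, here is a deliberately simplified, bully-style sketch in Python, in which the highest reachable node ID wins. It illustrates the concept only; real consensus protocols such as Raft or Paxos add terms, quorums, and log replication on top of this basic notion.

```python
from dataclasses import dataclass

@dataclass
class Node:
    node_id: int
    alive: bool = True  # stands in for "reachable over the network"

def elect_leader(nodes: list[Node]) -> Node:
    """Agree on a single coordinator: the highest-ID node still reachable."""
    candidates = [n for n in nodes if n.alive]
    if not candidates:
        raise RuntimeError("no reachable nodes; no consensus possible")
    return max(candidates, key=lambda n: n.node_id)

cluster = [Node(1), Node(2), Node(3)]
print("leader:", elect_leader(cluster).node_id)      # -> 3

cluster[2].alive = False                             # the leader fails
print("new leader:", elect_leader(cluster).node_id)  # -> 2 (fault tolerance)
```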


Rogue state-aligned actors are most critical cyber threat to UK

These groups have become emboldened to act with impunity regardless of whether or not they have Russia’s official backing, and the NCSC said it had “concerns” that these groups have a higher risk appetite than those advanced persistent threat (APT) actors – such as Sandworm – that operate as units of the Russian intelligence and military services. This makes them a far more dangerous threat because they may seek to attack CNI operators without constraint and without being able to fully understand, or control, the impact of their actions. The consequences of this could be exceptionally severe. At the same time, Russian APTs continue to advance their goal of weakening and dividing Moscow’s adversaries by interfering in the democratic process using mis- and disinformation and cyber attacks. ... Of particular concern in the next election cycle will be large language models (LLMs), which will almost certainly be used to generate fabricated content and deepfakes before the election, and a developing trend of targeting the email accounts of prominent individuals, as previously reported.


Fostering an automation-driven operations mindset in enterprises

By embracing automation, companies are changing the way they operate. This can mean rethinking their entire business model to become more profitable and competitive. However, this change is not always easy. Businesses face various challenges, such as dealing with market disruptions, figuring out the right number of employees needed for their operations, and keeping up with ever-changing market conditions. Businesses are recognising that, to stay relevant and successful, they need to undergo a digital transformation: adopting new technologies and ways of working to achieve significant positive changes in their operations. Automation has the power to create these changes across all types of industries, including retail, logistics, manufacturing, and the BFSI sector. ... This shift is so significant that the market for industrial automation in India is expected to double from USD 13.23 billion in 2023 to USD 25.76 billion by 2028. This is a clear indication that companies are investing heavily in automation to remain competitive and up to date with the latest advancements.


MongoDB vs. ScyllaDB: A Comparison of Database Architectures

The MongoDB architecture enables high availability through the concept of replica sets. MongoDB replica sets follow a primary-secondary model, where only the primary handles write operations. The secondaries hold a copy of the data and can be enabled to handle read operations only. A common replica set deployment consists of two secondaries, but additional secondaries can be added to increase availability or to scale read-heavy workloads. MongoDB supports up to 50 secondaries within one replica set. If the primary fails, one of the secondaries is elected as the new primary. ... Unlike MongoDB, ScyllaDB does not follow the classical relational database management system (RDBMS) architecture of one primary node and multiple secondary nodes; instead, it uses a decentralized structure in which all data is systematically distributed and replicated across multiple nodes forming a cluster. This architecture is commonly referred to as a multiprimary architecture. A cluster is a collection of interconnected nodes organized into a virtual ring architecture, across which data is distributed.
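
From a client's point of view, the difference is mostly about where reads and writes may go. The sketch below uses the pymongo and cassandra-driver Python clients with hypothetical hostnames, and assumes the databases, keyspace, and tables already exist.

```python
from pymongo import MongoClient
from cassandra.cluster import Cluster

# MongoDB: primary-secondary replica set. Writes always go to the primary;
# reads can be offloaded to secondaries via a read preference.
mongo = MongoClient(
    "mongodb://db1.example,db2.example,db3.example"
    "/?replicaSet=rs0&readPreference=secondaryPreferred"
)
mongo.shop.orders.insert_one({"order_id": 42})        # routed to the primary
print(mongo.shop.orders.find_one({"order_id": 42}))   # may be served by a secondary

# ScyllaDB: multiprimary ring (Cassandra-compatible driver). Any node can
# coordinate a request; data is distributed and replicated across the ring.
session = Cluster(["db1.example", "db2.example", "db3.example"]).connect()
session.execute("INSERT INTO shop.orders (id, total) VALUES (%s, %s)", (42, 10.0))
```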


Relationship management: The unsung art of optimizing IT teams

Getting the most out of IT staff and unleashing synergies among IT teams is one of the more underappreciated skills an IT leader must have to optimize their organization’s efforts. And for that you must develop an uncanny knack for relationship management and an understanding of how differing personalities can reinforce and work with one another to great effect. After all, IT brings together a diverse range of personalities, from statisticians, mathematicians, and developers who are rooted in the rigors of computer science, to liberal arts majors who might just as soon be writing a novel if it could pay the bills. So, how do you as an IT leader unify these wide-ranging personalities into a cohesive project team? The short answer is that you don’t try to change anyone. Instead, you seize on common goals most team members have: To see success, feel good about the work they do, and contribute in ways that play to their strengths — while avoiding what they find off-putting or unproductive.


As perimeter defenses fall, the identity-first approach steps into the breach

An identity-first strategy is all about knowing the identity of all humans and non-humans accessing points within the enterprise. In other words, the strategy calls for the organization to know each employee, contractor, and business partner, as well as each endpoint, server, or application, that seeks to connect. The approach is often also called identity-centric security. It's foundational to implementing zero trust, because zero trust says trust no entity until that entity — whether human or machine — can authenticate that it is who it says it is and can verify it has been authorized to access the network, application, API, server, etc. that it's seeking to access. ... As Avijit explains, no single solution delivers an identity-first strategy. Rather, it requires a synthesis of policies, practices, and technology — like nearly everything else in cybersecurity. Those elements must come together to achieve three key objectives, says Henrique Teixeira, senior director analyst at Gartner, a research and advisory firm.
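
As a concrete illustration of the "trust no entity until it authenticates and is authorized" rule, here is a minimal per-request check using the PyJWT library; the token claims, shared secret, and permission table are all illustrative assumptions, and a real deployment would combine such checks with the policies and practices described above.

```python
import jwt  # PyJWT

SECRET = "demo-secret"  # illustrative only; use a managed key in practice
PERMISSIONS = {"ci-runner": {"deploy-api"}, "alice": {"read-reports"}}

def allow(token: str, action: str) -> bool:
    """Trust no caller: verify identity first, then check authorization."""
    try:
        claims = jwt.decode(token, SECRET, algorithms=["HS256"])  # authentication
    except jwt.InvalidTokenError:
        return False
    return action in PERMISSIONS.get(claims.get("sub"), set())   # authorization

token = jwt.encode({"sub": "ci-runner"}, SECRET, algorithm="HS256")
print(allow(token, "deploy-api"))    # True: known identity, permitted action
print(allow(token, "read-reports"))  # False: authenticated but not authorized
```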


Collaborative strategies are key to enhanced ICS security

Cooperation between IT (information technology) and OT (operational technology) departments is extremely important to address unique security challenges in industrial sectors. The IT department is usually responsible for managing computer systems, networks, and data, while the OT department manages operating systems, industrial control systems, and sensors. Synergy between these departments allows for a better understanding and confrontation of threats involving industrial control systems. IT teams have expertise in information security, and OT teams have years of experience working with industrial systems. By combining the knowledge of both departments, one can proactively identify and address security vulnerabilities and threats. The advantages of cross-training these departments are many. First, understanding both aspects – information technology and industrial technology – allows for more effective identification and analysis of security challenges that are specific to the industrial sectors.


3 cybersecurity compliance challenges and how to address them

Changes in regulations can be as rapid as the introduction of new products or the emergence of new threats and attacks. Thus, organisations need to be agile enough to keep up with regulatory changes. Unfortunately, few organisations can do this on their own. The cybersecurity skills shortage continues to be a problem when it comes to compliance: many organisations lack the right people to properly address cyber threats, let alone continuously monitor regulatory changes. The challenge of keeping up with changing regulations can be addressed with the help of resources that track updates for you. Often, these are tied to specific business niches. For companies involved in credit and financial service operations, for example, the cybersecurity alerts of the National Association of State Credit Union Supervisors (NASCUS) provide up-to-date information on the latest regulations that affect those in the business of extending credit and other financial services. There are also regulation-monitoring subscription services that provide updates on regulations in general.


Ethical Considerations in AI and Cloud Computing: Ensuring Responsible Development and Use

Transparency and ethics go hand in hand. With AI, transparency is an essential ethical practice that plays a role in meaningful consent, accountability, and algorithmic auditing. Transparency is essential for driving public acceptance and trust in AI. AI has been accused of having a “black box” problem, referring to the lack of transparency in how it operates and the logic behind its decisions. The use of complex algorithms and proprietary systems contributes to the problem. Ethical practices must address the black-box issue by ensuring a high level of transparency in AI development and deployment. ... Assigning responsibility for the outcomes of AI-driven systems is perhaps the most important ethical consideration. If an AI-powered system guiding medical diagnosis makes a decision that leads to failed medical treatment, who should take responsibility? Is the AI developer, the technology firm that deployed the AI, or the doctor ultimately accountable for the bad information?



Quote for the day:

"A leader is one who sees more than others see, who sees farther than others see and who sees before others see.” -- Leroy Eimes