
Daily Tech Digest - April 29, 2025


Quote for the day:

"Don't let yesterday take up too much of today." -- Will Rogers



AI and Analytics in 2025 — 6 Trends Driving the Future

As AI becomes deeply embedded in enterprise operations and agentic capabilities are unlocked, concerns around data privacy, security and governance will take center stage. With emerging technologies evolving at speed, a mindset of continuous adaptation will be required to ensure requisite data privacy, combat cyber risks and successfully achieve digital resilience. As organizations expand their global footprint, understanding the implications of evolving AI regulations across regions will be crucial. While unifying data is essential for maximizing value, ensuring compliance with diverse regulatory frameworks is mandatory. A nuanced approach to regional regulations will be key for organizations navigating this dynamic landscape. ... As the technology landscape evolves, continuous learning becomes essential. Professionals must stay updated on the latest technologies while letting go of outdated practices. Tech talent responsible for building AI systems must be upskilled in evolving AI technologies. At the same time, employees across the organization need training to collaborate effectively with AI, ensuring seamless integration and success. Whether through internal upskilling or embarking on skills-focused partnerships, investment in talent management will prove crucial to winning the tech-talent gold rush and thriving in 2025 and beyond.


Generative AI is not replacing jobs or hurting wages at all, say economists

The researchers looked at the extent to which company investment in AI has contributed to worker adoption of AI tools, and also how chatbot adoption affected workplace processes. While firm-led investment in AI boosted the adoption of AI tools — saving time for 64 to 90 percent of users across the studied occupations — chatbots had a mixed impact on work quality and satisfaction. The economists found, for example, that "AI chatbots have created new job tasks for 8.4 percent of workers, including some who do not use the tools themselves." In other words, AI is creating new work that cancels out some potential time savings from using AI in the first place. "One very stark example that's close to home for me is there are a lot of teachers who now say they spend time trying to detect whether their students are using ChatGPT to cheat on their homework," explained Humlum. He also observed that a lot of workers now say they're spending time reviewing the quality of AI output or writing prompts. Humlum argues that can be spun negatively, as a subtraction from potential productivity gains, or more positively, in the sense that automation tools historically have tended to generate more demand for workers in other tasks. "These new job tasks create new demand for workers, which may boost their wages, if these are more high value added tasks," he said.


Advancing Digital Systems for Inclusive Public Services

Uganda adopted the modular open-source identity platform, MOSIP, two years ago. A small team of 12, with limited technical expertise, began adapting the MOSIP platform to align with Uganda's Registration of Persons Act, gradually building internal capacity. By the time the system integrator was brought in, Uganda had incorporated the digital public good (DPG) into its legal framework, providing the integrator with a foundation to build upon. This early customization helped shape the legal and technical framework needed to scale the platform. But improvements are needed, particularly in the documentation of the DPG. "Standardization, information security and inclusion were central to our work with MOSIP," Kisembo said. "Consent became a critical focus and is now embedded across the platform, raising awareness about privacy and data protection." ... Nigeria, with a population of approximately 250 million, is taking steps to coordinate its previously fragmented digital systems through a national DPI framework. The country deployed multiple digital solutions over the last 10 to 15 years, which were often developed in silos by different ministries and private sector agencies. In 2023 and 2024, Nigeria developed a strategic framework to unify these systems and guide its DPI adoption.


Eyes, ears, and now arms: IoT is alive

In just a few years, devices at home and work started including cameras to see and microphones to hear. Now, with new lines of vacuums and emerging humanoid robots, devices have appendages to manipulate the world around them. They’re not only able to collect information about their environment but can touch, “feel”, and move it. ... But, knowing the history of smart devices getting hacked, there’s cause for concern. From compromised baby monitors to open video doorbell feeds, bad actors have exploited default passwords and unencrypted communications for years. And now, beyond seeing and hearing, we’re on the verge of letting devices roam around our homes and offices with literal arms. What’s stopping a hacked robot vacuum from tampering with security systems? Or your humanoid helper from opening the front door? ... If developers want robots to become a reality, they need to create confidence in these systems immediately. This means following cybersecurity best practices: enabling peer-to-peer connectivity, outlawing generic credentials, and supporting software throughout the device lifecycle. Likewise, users can more safely participate in the robot revolution by segmenting their home networks, implementing multi-factor authentication, and regularly reviewing device permissions.
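As a concrete illustration of those practices, here is a minimal sketch (hypothetical device fields and checks, not any vendor's actual onboarding API) of how a home hub might refuse devices that still ship generic credentials, have fallen out of update support, or sit on an unsegmented network:

```python
# Hypothetical onboarding checks a hub might run before trusting a new smart
# device: no factory-default credentials, vendor still shipping security
# updates, and the device isolated on a segmented IoT network. Names are illustrative.
from dataclasses import dataclass
from datetime import date

DEFAULT_CREDENTIALS = {("admin", "admin"), ("admin", "password"), ("root", "root")}

@dataclass
class Device:
    vendor: str
    username: str
    password: str
    firmware_support_until: date   # end of vendor security updates
    network_segment: str           # e.g. "iot-vlan", never the trusted LAN

def onboarding_issues(device: Device, today: date) -> list[str]:
    issues = []
    if (device.username, device.password) in DEFAULT_CREDENTIALS:
        issues.append("factory-default credentials still in use")
    if device.firmware_support_until < today:
        issues.append("vendor no longer ships security updates")
    if device.network_segment != "iot-vlan":
        issues.append("device not isolated on a segmented IoT network")
    return issues

if __name__ == "__main__":
    vacuum = Device("Acme", "admin", "admin", date(2023, 1, 1), "trusted-lan")
    for problem in onboarding_issues(vacuum, date.today()):
        print("BLOCK:", problem)
```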


How to Launch a Freelance Software Development Career

Finding freelance work can be challenging in many fields, but it tends to be especially difficult for software developers. One reason is that many software development projects do not lend themselves well to a freelancing model because they require a lot of ongoing communication and maintenance. This means that, to freelance successfully as a developer, you'll need to seek out gigs that are sufficiently well-defined and finite in scope that you can complete them within a predictable period of time. ... Specifically, you need to envision yourself also as a project manager, a finance director, and an accountant. When you can do these things, it becomes easier not just to freelance profitably, but also to convince prospective clients that you know what you're doing and that they can trust you to complete projects with quality and on time. ... While creating a portfolio may seem obvious enough, one pitfall that new freelancers sometimes run into is being unable to share work due to nondisclosure agreements they sign with clients. When negotiating contracts, avoid this risk by ensuring that you'll retain the right to share any key aspects of a project for the purpose of promoting your own services. Even if clients won't agree to letting you share source code, they'll often at least allow you to show off the end product and discuss at a high level how you approached and completed a project.


Digital twins critical for digital transformation to fly in aerospace

Among the key conclusions were that there was a critical need to examine the standards that currently support the development of digital twins, identify gaps in the governance landscape, and establish expectations for the future. ... The net result will be that stakeholder needs and objectives become more achievable, resulting in affordable solutions that shorten test, demonstration, certification and verification, thereby decreasing lifecycle cost while increasing product performance and availability. Yet the DTC cautioned that cyber security considerations within a digital twin and across its external interfaces must be customisable to suit the environment and risk tolerance of digital twin owners. ... First, the DTC said that evidence suggests a necessity to examine the standards that currently support digital twins, identify gaps in the governance landscape, and set expectations for future standard development. In addition, the research team identified that standardisation challenges exist when developing, integrating and maintaining digital twins during design, production and sustainment. There was also a critical need to identify and manage requirements that support interoperability between digital twins throughout the lifecycle. This recommendation also applied to the more complex SoS Digital Twins development initiatives. Digital twin model calibration needs to be an automated process and should be applicable to dynamically varying model parameters.
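To illustrate the calibration point, here is a minimal sketch, assuming a simple first-order thermal model and synthetic telemetry (both illustrative, not from the DTC report), of automatically refitting a twin's parameters with SciPy as new sensor data arrives:

```python
# A minimal sketch of automated digital-twin calibration: fit a physics-style
# model's parameters to the latest sensor data so the twin keeps tracking the
# physical asset as conditions drift. Model and data here are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def thermal_model(t, gain, tau):
    """First-order temperature response of a component to a step input."""
    return gain * (1.0 - np.exp(-t / tau))

# Pretend these arrived from the physical asset's telemetry feed.
t_obs = np.linspace(0, 120, 25)                                   # seconds
temp_obs = thermal_model(t_obs, 42.0, 30.0) + np.random.normal(0, 0.5, t_obs.size)

# Recalibrate the twin: estimate (gain, tau) from the observations.
(gain_est, tau_est), _ = curve_fit(thermal_model, t_obs, temp_obs, p0=[30.0, 20.0])
print(f"calibrated gain={gain_est:.1f}, tau={tau_est:.1f}s")
```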


Quality begins with planning: Building software with the right mindset

Too often, quality is seen as the responsibility of QA engineers. Developers write the code, QA tests it, and ops teams deploy it. But in high-performing teams, that model no longer works. Quality isn’t one team’s job; it’s everyone’s job. Architects defining system components, developers writing code, product managers defining features, and release managers planning deployments all contribute to delivering a reliable product. When quality is owned by the entire team, testing becomes a collaborative effort. Developers write testable code and contribute to test plans. Product managers clarify edge cases during requirements gathering. Ops engineers prepare for rollback scenarios. This collective approach ensures that no aspect of quality is left to chance. ... One of the biggest causes of software failure isn’t building the wrong way, it’s building the wrong thing. You can write perfectly clean, well-tested code that works exactly as intended and still fail your users if the feature doesn’t solve the right problem. That’s why testing must start with validating the requirements themselves. Do they align with business goals? Are they technically feasible? Have we considered the downstream impact on other systems or components? Have we defined what success looks like?


What Makes You a Unicorn in Your Industry? Start by Mastering These 4 Pillars

First, you have to have the capacity, the skill, to excel in that area. Additionally, you have to learn how to leverage that standout aspect to make it work for you in the marketplace - incorporating it into your branding, spotlighting it in your messaging, maybe even including it in your name. Concise as the notion is, there's actually a lot of breadth and flexibility in it, for when it comes to selecting what you want to do better than anyone else is doing it, your choices are boundless. ... Consumers have gotten quite savvy at sniffing out false sincerity, so when they come across the real thing, they're much more prone to give you their business. Basically, when your client base believes you prioritize your vision, your team and creating an incredible product or service over financial gain, they want to work with you. ... Building and maintaining a remarkable "company culture" can just be a buzzword to you, or you can bring it to life. I can't think of any single factor that makes my company more valuable to my clients than the value I place on my people and the experience I endeavor to provide them by working for me. When my staff feels openly recognized, wholly supported and vitally important to achieving our shared outcomes, we're truly unstoppable. So keep in mind that your unicorn focus can be internal, not necessarily client-facing.



Conquering the costs and complexity of cloud, Kubernetes, and AI

While IT leaders clearly see the value in platform teams—nine in 10 organizations have a defined platform engineering team—there’s a clear disconnect between recognizing their importance and enabling their success. This gap signals major stumbling blocks ahead that risk derailing platform team initiatives if not addressed early and strategically. For example, platform teams find themselves burdened by constant manual monitoring, limited visibility into expenses, and a lack of standardization across environments. These challenges are only amplified by the introduction of new and complex AI projects. ... Platform teams that manually juggle cost monitoring across cloud, Kubernetes, and AI initiatives find themselves stretched thin and trapped in a tactical loop of managing complex multi-cluster Kubernetes environments. This prevents them from driving strategic initiatives that could actually transform their organizations’ capabilities. These challenges reflect the overall complexity of modern cloud, Kubernetes, and AI environments. While platform teams are chartered with providing infrastructure and tools necessary to empower efficient development, many resort to short-term patchwork solutions without a cohesive strategy. 
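To make the manual-monitoring burden concrete, here is a rough sketch (assuming the official Kubernetes Python client, a local kubeconfig, and an illustrative dollar rate, not real pricing) of automating per-namespace cost visibility for a cluster:

```python
# A rough sketch of automating cost visibility: sum the CPU requests per
# namespace for a given cluster context and apply an assumed dollar rate.
# The rate and the context name are illustrative, not real pricing or config.
from collections import defaultdict
from kubernetes import client, config

ASSUMED_USD_PER_CORE_HOUR = 0.04  # illustrative blended rate, not a quote

def cpu_cores(request: str) -> float:
    """Convert a Kubernetes CPU request ('500m' or '2') to cores."""
    return float(request[:-1]) / 1000 if request.endswith("m") else float(request)

def requested_cores_by_namespace(context: str) -> dict[str, float]:
    config.load_kube_config(context=context)      # assumes a local kubeconfig
    totals: dict[str, float] = defaultdict(float)
    for pod in client.CoreV1Api().list_pod_for_all_namespaces(watch=False).items:
        for c in pod.spec.containers:
            req = (c.resources.requests or {}).get("cpu") if c.resources else None
            if req:
                totals[pod.metadata.namespace] += cpu_cores(req)
    return totals

if __name__ == "__main__":
    for ns, cores in sorted(requested_cores_by_namespace("prod-cluster").items()):
        print(f"{ns}: {cores:.2f} cores requested "
              f"(~${cores * ASSUMED_USD_PER_CORE_HOUR:.2f}/hour)")
```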


Reporting lines: Could separating from IT help CISOs?

CFOs may be primarily concerned with the financial performance of the business, but they also play a key role in managing organizational risk. This is where CISOs can learn the tradecraft in translating technical measures into business risk management. ... “A CFO comes through the finance ranks without a lot of exposure to IT and I can see how they’re incentivized to hit targets and forecasts, rather than thinking: if I spend another two million on cyber risk mitigation, I may save 20 million in three years’ time because an incident was prevented,” says Schat. Budgeting and forecasting cycles can be a mystery to CISOs, who may engage with the CFO infrequently, and interactions are mostly transactional around budget sign-off on cybersecurity initiatives, according to Gartner. ... It’s not uncommon for CISOs to find security seen as a barrier, where the benefits aren’t always obvious, and are actually at odds with the metrics that drive the CIO. “Security might slow down a project, introduce a layer of complexity that we need from a security perspective, but it doesn’t obviously help the customer,” says Bennett. Reporting to CFOs can relieve potential conflicts of interest. It can allow CISOs to broaden their involvement across all areas of the organization, beyond input in technology, because security and managing risk is a whole-of-business mission.

Daily Tech Digest - March 15, 2025


Quote for the day:

"The most powerful leadership tool you have is your own personal example." -- John Wooden


Guardians of AIoT: Protecting Smart Devices from Data Poisoning

Machine learning algorithms rely on datasets to identify and predict patterns, and the performance of the model is determined by the quality and completeness of this data. Data poisoning attacks tamper with the AI's knowledge by introducing false or misleading information, usually following these steps: the attacker gains access to the training dataset and injects malicious samples; the AI is then trained on the poisoned data and incorporates these corrupt patterns into its decision-making process; once the poisoned model is deployed, the attackers exploit it to bypass a security system or tamper with critical tasks. ... The addition of AI into IoT ecosystems has expanded the potential attack surface. Traditional IoT devices were limited in functionality, but AIoT systems rely on data-driven intelligence, which makes them more vulnerable to such attacks and challenges the security of the devices: AIoT devices collect data from many different sources, which increases the likelihood of data being tampered with; poisoned data can have catastrophic effects on real-time decision making; and many IoT devices possess limited computational power to implement strong security measures, which makes them easy targets for these attacks.
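A small sketch of those steps in practice, using synthetic scikit-learn data and an arbitrary 15% label-flipping rate, shows how poisoned training data degrades a simple classifier:

```python
# Flip labels on a slice of the training set ("poison"), retrain, and compare
# accuracy against a clean model. Data is synthetic; the poison rate is arbitrary.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

rng = np.random.default_rng(0)
poisoned_y = y_tr.copy()
idx = rng.choice(len(poisoned_y), size=int(0.15 * len(poisoned_y)), replace=False)
poisoned_y[idx] = 1 - poisoned_y[idx]          # attacker flips 15% of labels
poisoned = LogisticRegression(max_iter=1000).fit(X_tr, poisoned_y)

print("clean accuracy:   ", clean.score(X_te, y_te))
print("poisoned accuracy:", poisoned.score(X_te, y_te))
```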


Preparing for The Future of Work with Digital Humans

For businesses to prepare their staff for the workplace of tomorrow, they need to embrace the technologies of tomorrow—namely, digital humans. These advanced solutions will empower L&D leaders to drive immersive learning experiences for their staff. Digital humans use various technologies and techniques like conversational AI, large language models (LLMs), retrieval augmented generation, digital human avatars, virtual reality (VR), and generative AI to produce engaging and interactive scenarios that are perfect for training. Recall that a major issue with current training methods is that staff never have opportunities to apply the information they just consumed, resulting in the loss of said information. Digital humans avoid this problem by generating lifelike roleplay scenarios where trainees can actually apply and practice what they have learned, reinforcing knowledge retention. In a sales training example, the digital human takes on the role of a customer, allowing the employee to practice their pitch for a new product or service. The employee can rehearse in realistic conditions rather than studying the details of the new product or service and jumping on a call with a live customer. A detractor might push back and say that digital humans lack a necessary human element.


3 ways test impact analysis optimizes testing in Agile sprints

Code modifications or application changes inherently present risks by potentially introducing new bugs. Not thoroughly validating these changes through testing and review processes can lead to unintended consequences—destabilizing the system and compromising its functionality and reliability. However, validating code changes can be challenging, as it requires developers and testers to either rerun their entire test suites every time changes occur or to manually identify which test cases are impacted by code modifications, which is time-consuming and not optimal in Agile sprints. ... Test impact analysis automates the change analysis process, providing teams with the information they need to focus their testing efforts and resources on validating application changes for each set of code commits versus retesting the entire application each time changes occur. ... In UI and end-to-end verifications, test impact analysis offers significant benefits by addressing the challenge of slow test execution and minimizing the wait time for regression testing after application changes. UI and end-to-end testing are resource-intensive because they simulate comprehensive user interactions across various components, requiring significant computational power and time. 
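As a toy illustration of the idea, the sketch below assumes a hand-written coverage map (real tools derive it from coverage or build data) and selects only the tests whose covered source files intersect a commit's changed files:

```python
# Toy test impact analysis: given which source files each test touches and the
# files changed in a commit, run only the impacted tests instead of the full suite.
COVERAGE_MAP = {
    "tests/test_checkout.py": {"src/cart.py", "src/payment.py"},
    "tests/test_catalog.py": {"src/catalog.py"},
    "tests/test_login.py": {"src/auth.py", "src/session.py"},
}

def impacted_tests(changed_files: set[str]) -> set[str]:
    return {test for test, sources in COVERAGE_MAP.items() if sources & changed_files}

if __name__ == "__main__":
    commit_diff = {"src/payment.py"}
    print(sorted(impacted_tests(commit_diff)))   # only the checkout tests rerun
```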


No one knows what the hell an AI agent is

Well, agents — like AI — are a nebulous thing, and they’re constantly evolving. OpenAI, Google, and Perplexity have just started shipping what they consider to be their first agents — OpenAI’s Operator, Google’s Project Mariner, and Perplexity’s shopping agent — and their capabilities are all over the map. Rich Villars, GVP of worldwide research at IDC, noted that tech companies “have a long history” of not rigidly adhering to technical definitions. “They care more about what they are trying to accomplish” on a technical level, Villars told TechCrunch, “especially in fast-evolving markets.” But marketing is also to blame in large part, according to Andrew Ng, the founder of AI learning platform DeepLearning.ai. “The concepts of AI ‘agents’ and ‘agentic’ workflows used to have a technical meaning,” Ng said in a recent interview, “but about a year ago, marketers and a few big companies got a hold of them.” The lack of a unified definition for agents is both an opportunity and a challenge, Jim Rowan, head of AI for Deloitte, says. On the one hand, the ambiguity allows for flexibility, letting companies customize agents to their needs. On the other, it may — and arguably already has — lead to “misaligned expectations” and difficulties in measuring the value and ROI from agentic projects. “Without a standardized definition, at least within an organization, it becomes challenging to benchmark performance and ensure consistent outcomes,” Rowan said. 


Researchers astonished by tool’s apparent success at revealing AI’s hidden motives

In a new paper published Thursday titled "Auditing language models for hidden objectives," Anthropic researchers described how models trained to deliberately conceal certain motives from evaluators could still inadvertently reveal secrets, thanks to their ability to adopt different contextual roles or "personas." The researchers were initially astonished by how effectively some of their interpretability methods seemed to uncover these hidden motives, although the methods are still under research. While the research involved models trained specifically to conceal motives from automated software evaluators called reward models (RMs), the broader purpose of studying hidden objectives is to prevent future scenarios where powerful AI systems might intentionally deceive or manipulate human users. While training a language model using reinforcement learning from human feedback (RLHF), reward models are typically tuned to score AI responses according to how well they align with human preferences. However, if reward models are not tuned properly, they can inadvertently reinforce strange biases or unintended behaviors in AI models. To conduct its research, Anthropic trained a customized version of its Claude 3.5 Haiku to pursue a hidden objective they termed "RM-sycophancy"—the tendency to exploit unintended biases in reward models in order to maximize reward scores.


Strategies for Success in the Age of Intelligent Automation

Firstly, the integration of AI into existing organizational frameworks calls for a largely collaborative environment. It is imperative for employees to perceive AI not as a usurper of employment, but instead as an ally in achieving collective organizational goals. Cultivating a culture of collaboration between AI systems and human workers is essential to the successful deployment of intelligent automation. Organizations should focus on fostering open communication channels, ensuring that employees understand how AI can enhance their roles and contribute to the organization’s success. To achieve this, leadership must actively engage with employees, addressing concerns and highlighting the benefits of AI integration. ... The ethical ramifications of AI workforce deployment demand meticulous scrutiny. Transparency, accountability, and fairness are integral and their importance can’t be overstated. It’s vital that AI-driven decisions are aligned with ethical standards. Organizations are responsible for establishing robust ethical frameworks that govern AI interactions, mitigating potential biases and ensuring equitable outcomes. The best way to do this requires implementing standards for monitoring AI systems, ensuring they operate within defined ethical boundaries.


AI & Innovation: The Good, the Useless – and the Ugly

First things first: there is good innovation, the kind that genuinely benefits society. AI that enhances energy efficiency in manufacturing, aids scientific discoveries, improves extreme weather prediction, and optimizes resource use in companies falls into this category. Governments can foster those innovations through targeted R&D support, incentives for firms to develop and deploy AI, “buy European tech” procurement policies, and investments in robust digital infrastructure. The Competitiveness Compass outlines similar strategies. That said, given how many different technologies are lumped together in the AI category—everything from facial recognition technology to smart ad tech, ChatGPT, and advanced robotics—it makes little sense to talk about good innovation and “AI and productivity” in the abstract. Most hype these days is about generative AI systems that mimic human creative abilities with striking aptitude. Yet, how transformative will an improved ChatGPT be for businesses? It might streamline some organizational processes, expedite data processing, and automate routine content generation. For some industries, like insurance companies, such capabilities may be revolutionary. For many others, its innovation footprint will be much more modest. 


Revolution at the Edge: How Edge Computing is Powering Faster Data Processing

Due to its unparalleled advantages, edge computing is rapidly becoming the primary supporting technology of industries where speed, reliability, or efficiency aren’t just useful but imperative. Edge computing relies on IoT as its most crucial component since there are billions of connected devices producing an immense and constant amount of data that needs to be processed right away. IoT devices in the residential sector, such as smart sensors in homes or Nest smart thermostats, as well as peripherals used for industrial automation in factories, all use edge computing. ... The way edge computing will function in the future is very exciting. With 5G, AI, and IoT, edge technologies are likely to become smarter, more widespread, and faster. Imagine a world where factories optimize themselves, smart traffic systems talk to autonomous vehicles, and healthcare devices stop illnesses from happening before they start.


Harnessing the data storm: three top trends shaping unstructured data storage and AI

The sheer volume of unstructured information generated by enterprises necessitates a new approach to storage. Object storage offers a better, more cost-effective method for handling significant datasets compared to traditional file-based systems. Unlike traditional storage methods, object storage treats each data item as a distinct object with its metadata. This approach offers both scalability and flexibility; ideal for managing the vast quantities of images, videos, sensor data, and other unstructured content generated by modern enterprises. ... Data lakes, the centralized repositories for both structured and unstructured data, are becoming increasingly sophisticated with the integration of AI and machine learning. These enable organizations to delve deeper into their data, uncovering hidden patterns and generating actionable insights without requiring complex and costly data preparation processes. ... The explosion of unstructured data presents both immense opportunities and challenges for organizations in every market across the globe. To thrive in this data-driven era, businesses must embrace innovative approaches to data storage, management, and analysis that are both cost-effective and compliant with evolving regulations. 
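A minimal sketch of that object-plus-metadata model, using boto3 against an S3-compatible bucket (bucket name, keys, body, and metadata values are placeholders, and credentials are assumed to be configured elsewhere):

```python
# Each item is stored as an object together with its own metadata, so the
# metadata travels with the data and comes back without a database lookup.
import boto3

s3 = boto3.client("s3")  # assumes credentials/region are already configured

s3.put_object(
    Bucket="example-unstructured-data",
    Key="cameras/line-4/frame_000123.jpg",
    Body=b"<jpeg bytes>",                 # placeholder payload
    Metadata={                            # per-object metadata
        "sensor-id": "line-4-cam-2",
        "captured-at": "2025-04-29T10:15:00Z",
        "retention-class": "90-days",
    },
)

# Later, a HEAD request returns the object's metadata alongside its size etc.
head = s3.head_object(Bucket="example-unstructured-data",
                      Key="cameras/line-4/frame_000123.jpg")
print(head["Metadata"])
```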


Open Source Tools Seen as Vital for AI in Hybrid Cloud Environments

The landscape of enterprise open source solutions is evolving rapidly, driven by the need for flexibility, scalability, and innovation. Enterprises are increasingly relying on open source technologies to drive digital transformation, accelerate software development, and foster collaboration across ecosystems. With advancements in cloud computing, AI, and containerization, open source solutions are shaping the future of IT by providing adaptable and secure platforms that meet evolving business needs. The active and diverse community support ensures continuous improvement, making open source a cornerstone of modern enterprise technology strategies. Red Hat's portfolio, including Red Hat Enterprise Linux, Red Hat OpenShift, Red Hat AI and Red Hat Ansible Automation Platform, provides robust platforms that support diverse workloads across hybrid and multi-cloud environments. Additionally, Red Hat's extensive partner ecosystem provides more seamless integration and support for a wide range of technologies and applications. Our commitment to open source principles and continuous innovation allows us to deliver solutions that are secure, scalable, and tailored to the needs of our customers. Open source has proven to be trusted and secure at the forefront of innovation.


Daily Tech Digest - June 06, 2024

How AI will kill the smartphone

The great thing about AI is that it’s software-upgradable. When you buy an AI phone, the phone gets better mainly through software updates, not hardware updates. ... As we’re talking back and forth with AI agents, people will use earbuds and, increasingly, AI glasses to interact with AI chatbots. The glasses will use built-in cameras for photo and video multimodal AI input. As glasses become the main interface, the user experience will likely improve more with better glasses (not better phones), with improved light engines, speakers, microphones, batteries, lenses, and antennas. With the inevitable and inexorable miniaturization of everything, eventually a new class of AI glasses will emerge that won’t need wireless tethering to a smartphone at all, and will contain all the elements of a smartphone in the glasses themselves. ... Glasses will prove to be the winning device, because glasses can position speakers within an inch of the ears, hands-free microphones within four inches of the mouth and, the best part, screens directly in front of the eyes. Glasses can be worn all day, every day, without anything physically in the ear canal. In fact, roughly 4 billion people already wear glasses every day.


Million Dollar Lines of Code - An Engineering Perspective on Cloud Cost Optimization

Storage is still cheap. We should really still be thinking about storage as being pretty cheap. Calling APIs costs money. It's always going to cost money. In fact, you should accept that anything you do in the cloud costs money. It might not be a lot; it might be a few pennies. It might be a few fractions of pennies, but it costs money. It would be best to consider that before you call an API. The cloud has given us practically infinite scale, however, I have not yet found an infinite wallet. We have a system design constraint that no one seems to be focusing on during design, development, and deployment. What's the important takeaway from this? Should we now layer one more thing on top of what it means to be a software developer in the cloud these days? I've been thinking about this for a long time, but the idea of adding one more thing to worry about sounds pretty painful. Do we want all of our engineers agonizing over the cost of their code? Even in this new cloud world, the following quote from Donald Knuth is as true as ever.
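To make "calling APIs costs money" visible during development, here is a rough sketch that meters a hypothetical put_record call at an assumed, illustrative per-call price (not any provider's real pricing):

```python
# Wrap outbound cloud calls with a counter that prices them at an assumed
# per-request rate, so the pennies become visible before the code ships.
import functools

ASSUMED_USD_PER_CALL = 0.000005   # illustrative per-request price, not real pricing
calls = {"count": 0}

def metered(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        calls["count"] += 1
        return fn(*args, **kwargs)
    return wrapper

@metered
def put_record(record: dict) -> None:
    ...  # stand-in for a real cloud SDK call

if __name__ == "__main__":
    for record_id in range(100_000):        # one API call per record in a batch job
        put_record({"id": record_id})
    batch_cost = calls["count"] * ASSUMED_USD_PER_CALL
    print(f"{calls['count']} calls cost about ${batch_cost:.2f} per run, "
          f"roughly ${batch_cost * 24 * 365:.0f}/year if run hourly")
```

A single per-item call inside a hot loop is exactly the kind of "million dollar line" the talk describes: cheap per request, expensive at scale.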


The five-stage journey organizations take to achieve AI maturity

We are far from seeing most organizations fully versed in and comfortable with AI as part of their company strategy. However, Asana and Anthropic have outlined five stages of AI maturity, a guide executives can use to gauge where their company stands in implementing real transformative outcomes. Many respondents say they're in either the first or second stage. Only seven percent claim they've achieved the highest stage. ... Asana and Anthropic conclude that boosting comprehension is important, offering resources, training programs and support structures for knowledge workers to improve their education. Companies must also prioritize AI safety and reliability, meaning that AI vendors should be selected with "complete, integrated data models and invest in high-quality data pipelines and robust governance practices." AI responses must be interpretable to facilitate decision-making and should always be controlled and directed by human operators. Other elements of organizations in Stage 5 include embracing a human-centered approach, developing strong comprehensive policies and principles to navigate AI adoption responsibly, and being able to measure AI's impact and value.


Unauthorized AI is eating your company data, thanks to your employees

A major problem with shadow AI is that users don’t read the privacy policy or terms of use before shoveling company data into unauthorized tools, she says. “Where that data goes, how it’s being stored, and what it may be used for in the future is still not very transparent,” she says. “What most everyday business users don’t necessarily understand is that these open AI technologies, the ones from a whole host of different companies that you can use in your browser, actually feed themselves off of the data that they’re ingesting.” ... Using AI, even officially licensed ones, means organizations need to have good data management practices in place, Simberkoff adds. An organization’s access controls need to limit employees from seeing sensitive information not necessary for them to do their jobs, she says, and longstanding security and privacy best practices still apply in the age of AI. Rolling out an AI, with its constant ingestion of data, is a stress test of a company’s security and privacy plans, she says. “This has become my mantra: AI is either the best friend or the worst enemy of a security or privacy officer,” she adds. “It really does drive home everything that has been a best practice for 20 years.”


How a data exchange platform eases data integration

As our software-powered world becomes more and more data-driven, unlocking and unblocking the coming decades of innovation hinges on data: how we collect it, exchange it, consolidate it, and use it. In a way, the speed, ease, and accuracy of data exchange has become the new Moore’s law. Safely and efficiently importing a myriad of data file types from thousands or even millions of different unmanaged external sources is a pervasive, growing problem. ... Data exchange and import solutions are designed to work seamlessly alongside traditional integration solutions. ETL tools integrate structured systems and databases and manage the ongoing transfer and synchronization of data records between these systems. Adding a solution for data-file exchange next to an ETL tool enables teams to facilitate the seamless import and exchange of variable unmanaged data files. The data exchange and ETL systems can be implemented on separate, independent, and parallel tracks, or so that the data-file exchange solution feeds the restructured, cleaned, and validated data into the ETL tool for further consolidation in downstream enterprise systems.
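A simplified sketch of that hand-off, with illustrative column names and validation rules, showing an inbound CSV being restructured and validated before its rows are passed downstream to the ETL layer:

```python
# A data-file exchange step: read a partner's CSV, reject incomplete rows,
# normalize types, and hand the cleaned records to the downstream ETL layer.
import csv
from datetime import datetime

REQUIRED = ["customer_id", "amount", "invoice_date"]

def clean_rows(path: str):
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if any(not row.get(col) for col in REQUIRED):
                continue                                   # reject incomplete rows
            yield {
                "customer_id": row["customer_id"].strip(),
                "amount": round(float(row["amount"]), 2),
                "invoice_date": datetime.strptime(row["invoice_date"], "%d/%m/%Y").date(),
            }

def load_into_etl(rows):
    for row in rows:
        print("handing to ETL:", row)       # stand-in for the downstream ETL API

if __name__ == "__main__":
    load_into_etl(clean_rows("partner_invoices.csv"))
```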


AI is used to detect threats by rapidly generating data that mimics realistic cyber threats

When we talk about AI, it’s essential to understand its fundamental workings—it operates based on the data it’s fed. Hence, the data input is crucial; it needs to be properly curated. Firstly, ensuring anonymisation is key; live customer data should never be directly integrated into the model to comply with regulatory standards. Secondly, regulatory compliance is paramount. We must ensure that the data we feed into the framework adheres to all relevant regulations. Lastly, many organisations grapple with outdated legacy tech stacks. It’s essential to modernise and streamline these systems to align with the requirements of contemporary AI technology. Also, mitigating bias in AI is crucial. Since the data we use is created by humans, biases can inadvertently seep into the algorithms. Addressing this issue requires careful consideration and proactive measures to ensure fairness and impartiality. ... It’s important for people to be highly aware of biases and misconceptions surrounding AI. We need to be conscious of the potential biases in AI systems. 
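As a small illustration of the anonymisation point, here is a sketch that pseudonymises direct identifiers before records enter a training pipeline (the field list and salt handling are illustrative, not a full compliance solution):

```python
# Replace direct identifiers with salted, hashed tokens so no raw customer
# data reaches the model, while non-identifying features pass through.
import hashlib

DIRECT_IDENTIFIERS = {"name", "email", "phone", "account_number"}

def pseudonymise(record: dict, salt: str) -> dict:
    out = {}
    for key, value in record.items():
        if key in DIRECT_IDENTIFIERS:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()[:12]
            out[key] = f"anon_{digest}"      # stable token, no raw customer data
        else:
            out[key] = value
    return out

if __name__ == "__main__":
    raw = {"name": "A. Customer", "email": "a@example.com", "txn_amount": 120.5}
    print(pseudonymise(raw, salt="rotate-me-per-dataset"))
```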


Tackling Information Overload in the Age of AI

The reason this story is so universal is that the kind of information that drives knowledge-intensive workflows is unstructured data, which has stubbornly resisted the automation wave that has taken on so many other enterprise workflows using software and software-as-a-service (SaaS). SaaS has empowered teams with tools they can use to efficiently manage a wide variety of workflows involving structured data. However, SaaS offerings have been unable to take on the core “jobs to be done” in the knowledge-intensive enterprise because they can’t read and understand unstructured data. They aren’t capable of performing human-like services with autonomous decision-making abilities. As a result, knowledge workers are still stuck doing a lot of monotonous and undifferentiated data work. However, newly available large language models (LLMs) and generative AI excel at processing and extracting meaning from unstructured data. LLM-powered “AI agents” can perform services such as reading and summarizing content and prioritizing work and can automate multistage knowledge workflows autonomously.


CDOs Should Understand Business Strategy to Be Outcome-focused

To be outcome-focused, the CDO has to prioritize understanding the business or corporate strategy, he says. In addition, one needs to comprehend the organizational aspirations and how to deliver on key business outcomes, which could include monetization of commercial opportunities, risk mitigation, cost savings, or providing client value. Next, Thakur advises leaders to focus on the foundational data and analytic capabilities to drive business outcomes. There must be a well-organized data and analytic strategy to start with, a good tech stack, an analytic environment, data management, and governance principles. While delivering on some of the use cases may take time, it is imperative to have quick wins along the way, says Thakur. He recommends CDOs create reusable data products and assets while having an agile operationalization process. Then, Thakur suggests data leaders create a solid engagement model to ensure that the data analytics team is in sync with business and product owners. He urges leaders to put an effective ideation and opportunity management framework into action to capture business ideas and prioritize use cases.


Besides the traditional functions of sales and finance, there is a growing demand for tech-driven talent in the sector. It’s important to note that the demand for technology expertise isn’t limited to software development but encompasses different competencies, such as cybersecurity, UI/UX development, AI/ML engineering, digital marketing and data analytics. This is expected as there has been an increase in the use of AI and ML in the BFSI landscape, most prominently in fraud detection, KYC verification, sales and marketing processes. ... Now, new-age competencies such as digital skills, data analysis, AI and cybersecurity are increasingly becoming part of these programmes. To meet the growing demand for specialised skills and roles, many BFSI organisations encourage employees with financial expertise to develop digital skills that enable them to work more efficiently. ... At the crossroads of significant industry-level transformations, employers expect a variety of soft skills in addition to technical competencies. 


Cyber Resilience Act Bans Products with Known Vulnerabilities

In future, manufacturers will no longer be allowed to place smart products with known security vulnerabilities on the EU market – if they do, they could face severe penalties ... When it comes to cyber resilience, the legislation of the Cyber Resilience Act makes it clear that customers – both residential and commercial – have an effective right to secure software. However, the race to be the first to discover vulnerabilities continues: organisations would be well advised to implement both effective CVE detection and impact assessment now to better scrutinise their own products and protect themselves against the serious consequences of vulnerability scenarios. “The CRA requires all vendors to perform mandatory testing, monitoring and documentation of the cybersecurity of their products, including testing for unknown vulnerabilities known as ‘zero days’,” said Jan Wendenburg, CEO of ONEKEY, a cybersecurity company based in Duesseldorf, Germany. ... Many manufacturers and distributors are not sufficiently aware of potential vulnerabilities in their own products. 
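As a hedged sketch of the kind of CVE detection the CRA points toward, the snippet below checks an illustrative component list against the public OSV.dev query API (the components and versions are examples, not a real product's bill of materials):

```python
# Check each component in a product's bill of materials against a public
# vulnerability database before release. Uses the OSV.dev query endpoint.
import requests

COMPONENTS = [
    {"name": "jinja2", "ecosystem": "PyPI", "version": "2.4.1"},
    {"name": "lodash", "ecosystem": "npm", "version": "4.17.15"},
]

def known_vulnerabilities(component: dict) -> list[str]:
    resp = requests.post(
        "https://api.osv.dev/v1/query",
        json={"version": component["version"],
              "package": {"name": component["name"],
                          "ecosystem": component["ecosystem"]}},
        timeout=10,
    )
    resp.raise_for_status()
    return [v["id"] for v in resp.json().get("vulns", [])]

if __name__ == "__main__":
    for comp in COMPONENTS:
        ids = known_vulnerabilities(comp)
        status = ", ".join(ids) if ids else "no known advisories"
        print(f"{comp['name']} {comp['version']}: {status}")
```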



Quote for the day:

"Life always begins with one step outside of your comfort zone." -- Shannon L. Alder

Daily Tech Digest - June 23, 2020

Four Steps Public-Sector CIOs Should Take To Break Down Silos Impeding Innovation

Government agencies, almost by design, are large and slow-moving. When something goes wrong, the response is often to add another policy and another layer of approvals and reviews. This slows things down even more and frustrates efforts by CIOs and other decision-makers to make informed and timely choices. Further inhibiting—and complicating—operations, individual mission centers facing bureaucratic barriers often create their own duplicative capabilities, delivered quickly and effectively, but just for their own use. These silos are especially common when it comes to information technology and are given the pejorative label of “Shadow IT” by CIOs and others at the enterprise level who want to assert control over all agency technology. ... Don’t reinvent solutions just because that’s the way it’s been done. Resist the urge to customize. Change your policies and practices, if you can, so you can set and use standards that break down application, data and user silos. Push back internally on those policies that exist for the lowest common denominator. Challenge your technologists to leverage these standards and build tools that can solve enterprise problems at speed and scale. 


Italian Banking Association ready to trial Central Bank Digital Currency

The announcement read, "Italian banks are available to participate in projects and experiments of a digital currency of the European Central Bank, contributing, thanks to the skills acquired in the creation of infrastructure and distributed governance, to speed up the implementation of a European-level initiative in a first nation." A year ago the Association of Italian Banks set up a working group dedicated to deepening the understanding related to digital coins and crypto assets. From this group 10 recommendations were announced, which include: Monetary stability and full respect for the European regulatory framework must be preserved as a matter of priority; Italian banks are already operating on a distributed ledger technology (DLT) infrastructure with the Spunta project. They intend to be part of the change brought about by an important innovation such as digital coins; A programmable digital currency represents an innovation in the financial field capable of profoundly revolutionizing money and exchange. This is a transformation capable of bringing significant potential added value, in particular in terms of the efficiency of the operating and management processes. ...


The next software disruption: How vendors must adapt to a new era

The rise of PaaS has changed what it takes to be a successful enterprise-software vendor. As PaaS services become more sophisticated, software application vendors have a tougher time justifying a price premium for products that could be delivered with a thin user interface on top of generic PaaS services. With PaaS tools giving attackers and customers themselves the means to develop new applications quickly, software vendors that do not innovate in kind will face increased risk. Software vendors need to defend their share of the profit pool by taking a clear look at where they have the best and most defendable opportunities to differentiate themselves. Rather than going head-to-head with the Big Three, one strategy is to specialize and tailor solutions to the needs of targeted verticals and use cases. This strategy proved successful in the early 2010s, when SaaS disruptors first entered the market. The legacy-software vendors that were closest to the customer and had a high degree of industry and domain expertise protected their market share and maintained their enterprise value-to-revenue multiples, while those that stressed differentiation on the basis of their technology were more vulnerable.



How Manufacturers Can Address Cybercrime in the Ongoing Pandemic

Security has never been a top priority for manufacturers. Security features and best practices are often not taken into account when new products are purchased. With COVID-19 requiring companies across all industries to explore remote workforce options, manufacturing companies prioritized, and invested in, automation systems that make it easier for their employees to do their jobs from the safety of their homes. Although it is encouraging to see companies making investments to support their employees, many automation tools are being purchased without considering their security features. Standard security best practices such as checking for previous reported vulnerabilities, changing factory settings and passwords, and training employees in the secure ways to use the new solutions are not happening. With fewer guards and controls in place, it's easy for industrial control systems to be hacked simply through accident or user error. Despite the challenges plaguing the industry -- outdated technology, a disconnect between safety and security, and vulnerabilities associated with remote work operations -- there are small steps that manufacturers can take to significantly improve their security posture.


IoT Security Is a Mess. Privacy 'Nutrition' Labels Could Help

At the IEEE Symposium on Security & Privacy last month, researchers from Carnegie Mellon University presented a prototype security and privacy label they created based on interviews and surveys of people who own IoT devices, as well as privacy and security experts. They also published a tool for generating their labels. The idea is to shed light on a device's security posture but also explain how it manages user data and what privacy controls it has. For example, the labels highlight whether a device can get security updates and how long a company has pledged to support it, as well as the types of sensors present, the data they collect, and whether the company shares that data with third parties. “In an IoT setting, the amount of sensors and information you have about users is potentially invasive and ubiquitous," says Yuvraj Agarwal, a networking and embedded systems researcher who worked on the project. "It’s like trying to fix a leaky bucket. So transparency is the most important part. This work shows and enumerates all the choices and factors for consumers." Nutrition labels on packaged foods have a certain amount of standardization around the world, but they're still more opaque than they could be. And security and privacy issues are even less intuitive to most people than soluble and insoluble fiber.
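A rough sketch of the label as a machine-readable structure, with field names that mirror the items mentioned above (they are illustrative, not the CMU researchers' actual schema):

```python
# A privacy/security "nutrition label" as data: update support window, sensors,
# data collected, third-party sharing, and available privacy controls.
from dataclasses import dataclass, asdict
import json

@dataclass
class PrivacyLabel:
    device: str
    security_updates_until: str
    sensors: list[str]
    data_collected: list[str]
    shared_with_third_parties: bool
    privacy_controls: list[str]

label = PrivacyLabel(
    device="Example Video Doorbell",
    security_updates_until="2028-06",
    sensors=["camera", "microphone", "motion"],
    data_collected=["video clips", "audio", "event timestamps"],
    shared_with_third_parties=True,
    privacy_controls=["disable audio recording", "delete stored clips"],
)

print(json.dumps(asdict(label), indent=2))   # printable on packaging or a product page
```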


Smart Devices: How Long Will Security Updates Be Issued?

Europe's automobile industry is bound by regulations for supporting vehicle components to ensure consumers have access to critical parts, says Brad Ree, CTO of ioXt and board member with the ioXt Alliance, which is a trade group dedicated to securing IoT devices. But Ree says with connected devices, no regulator has yet made the leap to ensure that the software is supported for an extended period. "Right now, consumers really don't know how long the product is going to be supported," Ree says. That's critical because smart devices cost more than devices without software control features. The U.S. is trying to nudge manufacturers in the right direction. Two years ago, the National Telecommunications and Information Administration created a document about what type of information companies should clearly communicate to consumers before they buy a smart device. The voluntary recommendations include describing whether and how a device receives security updates and the anticipated timeline for the end of security support. 



Why the open source DBaaS market is hot

"The good news is that there's a lot of open source database choice for organizations," said James Curtis, senior research analyst at S&P Global. "The bad news is that there's a lot open source choice and that can cause some confusion." While a growing number of vendors support open source database products, the public cloud vendors also offer versions of many popular open source databases, Curtis noted. For example, AWS boasts a managed Cassandra service, as well as support for MySQL and PostgreSQL with its Relational Database Service (RDS). When they get ready to decide on which route to take, Curtis said that organizations need to choose a vendor that provides the support they are looking for. For open source database vendors, DBaaS might also represent a threat as it has the potential to replace or cannibalize existing on-premises deployments. Among DBaaS benefits, one of the most important is reducing the time organizations need to spend managing the infrastructure. "What will happen in the future is that database workloads will gravitate to the right environment in which it makes sense to run that workload," Curtis said. "Some workloads are best suited to run on premises and perhaps always will."


Organizations Must Reset Expectations to Spring Back from Pandemic

The first step is identifying an organization’s critical assets and the missions they support. The SEI's foundational process improvement approach to operational resilience management, the CERT Resilience Management Model (CERT-RMM), defines four asset types: people, facilities, technology, and information. "The COVID-19 crisis has impaired our people and our facilities, so it’s akin to a natural disaster," said Butkovic. However, most disaster plans did not anticipate that the event would affect everyone, everywhere. "Typically, you don’t have fires at all of your facilities at the same time, with little notion of when they’ll be put out. In that way, there are lessons to be learned from cyber events, which can affect all locations simultaneously." During a cyber attack, an organization might keep its technology assets out of harm's way by modifying firewall rules. During the COVID-19 pandemic, most human assets are keeping out of harm’s way by staying away from the workplace. But not all safeguards can remain in place forever. 


The Future of Work: Best Managed with Agility, Diversity, Resilience

While the future is uncertain, one clear trend is that remote work will play a larger role during and after the pandemic. After experiencing several weeks of office closures, organizational leaders are questioning the wisdom of maintaining the same amount of office space because in most cases, employees have proved they can be productive and collaborate effectively while working remotely. On the flip side, some employees have discovered they prefer working at home, at least part-time. To effect social distancing in the short term, employers must rethink space utilization. Interestingly, they may find they've stumbled upon their longer-term strategy, which is some version of a partly remote, partly on-site workforce. With digital transformation, more tasks and processes are aided or facilitated by software. Meanwhile, the organizations' tech stacks are becoming increasingly virtual (cloud-based), intelligent (machine learning and AI), and diverse (including IoT). However, digital transformation isn't just about technology implementation, it's also about cultural transformation which reflects greater diversity and cross-departmental collaboration.


Building Resiliency in the Age of Disruption and Uncertainty

Attendees discussed how risk needs to be managed holistically. James Fong, Regional Business Director at RSA, highlighted the need to view risk in the context of four pillars namely, operations, workforce, supply chain and cybersecurity. Fong said that “Operational risk management, IT and security risk management, regulatory and corporate compliance, business resiliency, third party governance and audit management, need to be part of an integrated risk management plan.” Fong continued “Risk data needs to be shared on customised dashboards for executives, CISOs and others. The data needs to give a clear understanding of the monetary cost associated with the risk. For example, how much is a risk worth? What is the cost of the threat?” Importantly, organisations need to understand the risk associated with third party suppliers. A more common view expressed is that no matter how much you prepare yourself, there will always be instances when organisations need to react to situational change. For example, incoming threats that can choke or change content in the media industry.



Quote for the day:

"Challenges in life always seek leaders and leaders seek challenges." -- Wayde Goodall