
Daily Tech Digest - January 15, 2025

Passkeys: they're not perfect but they're getting better

Users are largely unsure about the implications for their passkeys if they lose or break their device, as it seems their device holds the entire capability to authenticate. To trust passkeys as a replacement for the password, users need to be prepared and know what to do in the event of losing one – or all – of their devices. ... Passkeys are ‘long life’ because users can’t forget them or create one that is weak, so if they’re done well there should be no need to reset or update them. As a result, there’s an increased likelihood that at some point a user will want to move their passkeys to the Credential Manager of a different vendor or platform. This is currently challenging to do, but FIDO and vendors are actively working to address this issue and we wait to see support for this take hold across the market. ... For passkey-protected accounts, potential attackers are now more likely to focus on finding weaknesses in account recovery and reset requests – whether by email, phone or chat – and pivot to phishing for recovery keys. These processes need to be sufficiently hardened by providers to prevent trivial abuse by these attackers and to maintain the security benefits of using passkeys. Users also need to be educated on how to spot and report abuse of these processes before their accounts are compromised.


Securing Payment Software: How the PCI SSF Modular System Enhances Flexibility and Security

The framework was introduced to replace the aging Payment Application Data Security Standard (PA-DSS), which primarily focused on payment application security. As software development technologies and methodologies rapidly evolved, the need for a dynamic and adaptable security standard became increasingly apparent. Consequently, this realization prompted the creation of the PCI SSF. As a result, the PCI SSF encompasses a broader range of security requirements specifically tailored for modern software environments. ... The modular system of the PCI SSF is specifically designed to offer both flexibility and scalability, thereby enabling organizations to address their specific security needs based on their unique software environments. In addition, the modular approach allows organizations to select and implement only the components relevant to their software, which, in turn, simplifies the process of achieving and maintaining compliance. ... The PCI SSF’s modular system marks a transformative step in payment software security, effectively balancing adaptability with comprehensive protection against evolving cyber threats. Moreover, its flexible, scalable, and comprehensive approach allows organizations to tailor their security efforts to their unique needs, thereby ensuring robust protection for payment data.


The cloud cost wake-up call I predicted

Cloud computing starts as a flexible and budget-friendly option, especially with its enticing pay-per-use model. However, unchecked growth can turn this dream into a financial nightmare due to the complexities the cloud introduces. According to the Flexera State of the Cloud Report, 87% of organizations have adopted multicloud strategies, complicating cost management even more by scattering workloads and expenses across various platforms. The rise of cloud-native applications and microservices has further complicated cost management. These systems abstract physical resources, simplifying development but making costs harder to predict and control. Recent studies have revealed that 69% of CPU resources in container environments go unused, a direct contradiction of optimal cost management practices. Although open-source tools like Prometheus are excellent for tracking usage and spending, they often fall short as organizations scale. ... A critical component of effective cloud cost management is demystifying cloud pricing models. Providers often lay out their pricing structures in great detail, but translating them into actual costs can be difficult. A lack of understanding can lead to spiraling costs.
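
As a rough illustration of the visibility the article is pointing at, here is a minimal Python sketch that compares the CPU containers actually consume with the CPU they have reserved, using Prometheus's HTTP API. The server URL is hypothetical, and the metric names assume the standard cAdvisor and kube-state-metrics exporters are being scraped.

    # Sketch: estimate cluster-wide CPU request utilization from Prometheus.
    # The URL is hypothetical; metric names assume cAdvisor and
    # kube-state-metrics are scraped by this Prometheus server.
    import requests

    PROM_URL = "http://prometheus.example.com/api/v1/query"  # hypothetical

    def instant_query(expr: str) -> float:
        """Run an instant PromQL query and return the first scalar value."""
        resp = requests.get(PROM_URL, params={"query": expr}, timeout=10)
        resp.raise_for_status()
        result = resp.json()["data"]["result"]
        return float(result[0]["value"][1]) if result else 0.0

    # CPU actually consumed by containers (in cores), averaged over 5 minutes.
    used = instant_query('sum(rate(container_cpu_usage_seconds_total[5m]))')

    # CPU requested (reserved) by pods, in cores.
    requested = instant_query(
        'sum(kube_pod_container_resource_requests{resource="cpu"})')

    if requested:
        print(f"CPU request utilization: {used / requested:.0%} "
              f"({requested - used:.1f} cores reserved but idle)")

A figure like the 69% of unused container CPU cited above comes from exactly this kind of comparison, aggregated across clusters and over time.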


Using cognitive diversity for stronger, smarter cyber defense

Cognitive biases significantly influence decision-making during cybersecurity incidents by framing how individuals interpret information, assess risks, and respond to threats. ... Integrating cognitive science into cybersecurity tools involves understanding how human cognitive processes – such as perception, memory, decision-making, and problem-solving – affect security tasks. Designing user-friendly tools requires aligning cognitive models with diverse user behaviors while managing cognitive load, ensuring usability without compromising security, and adapting to the fast-changing cybersecurity landscape. Interfaces must cater to varying skill levels, promote awareness, and support effective decision-making, all while addressing ethical considerations like privacy and bias. Interdisciplinary collaboration between psychology, computer science, and cybersecurity experts is essential but challenging due to differences in expertise and communication styles. ... Cognitive diversity can frequently divert resources or distract from present, immediate or emerging threats. Focus on the things that are likely to happen. Implement defensive measures which require little resource while more complex measures are prioritized.


Next-gen Ethernet standards set to move forward in 2025

Beyond the big-ticket items of higher bandwidth and AI, a key activity in any year for Ethernet is interoperability testing for all manner of existing and emerging specifications. 200 Gigabits per second per lane is an important milestone on the path to an even higher bandwidth Ethernet specification that will exceed 1 Terabit per second. ... With 800GbE now firmly established, adoption and expansion into ever larger bandwidth will be a key theme in 2025. There will be no shortage of vendors offering 800 GbE equipment in 2025, but when it comes to Ethernet standards, focus will be on 1.6 Terabits/second Ethernet. “As 800GbE has come to market, the next speed for Ethernet is being talked about already,” Martin Hull, vice president and general manager for cloud and AI platforms at Arista Networks, told Network World. “1.6Tb Ethernet is being discussed in terms of the optics, the form factors and use cases, and we expect industry leaders to be trialing 1.6T systems towards the end of 2025.” ... “High-speed computing requires high bandwidth and reliable interconnect solutions,” Rodgers said. “However, high-speed also means high power and higher heat, placing more demands on the electrical grid and resources and creating a demand for new options.” That’s where LPOs will fit in.


Stop wasting money on ineffective threat intelligence: 5 mistakes to avoid

“CTI really needs to fall underneath your risk management and if you don’t have a risk management program you need to identify that (as a priority),” says Ken Dunham, cyber threat director for the Qualys Threat Research Unit. “It really should come down to: what are the core things you’re trying to protect? Where are your crown jewels or your high value assets?” Without risk management to set those priorities, organizations will not be able to appropriately set requirements for intelligence collection that will have them gather the kind of relevant sources that pertain to their most valuable assets. ... Bad intelligence can often be worse than none, leading to a lot of time wasted by analysts to validate and contextualize poor quality feeds. Even worse, if this work isn’t done appropriately, poor quality data could potentially even lead to misguided choices at the operational or strategic level. Security leaders should be tasking their intelligence team with regularly reviewing the usefulness of their sources based on a few key attributes. ... Even if CTI is doing an excellent job collecting the right kind of quality intelligence that its stakeholders are asking for, all that work can go for naught if it isn’t appropriately routed to the people that need it — in the format that makes sense for them.


Exposure Management: A Strategic Approach to Cyber Security Resource Constraint

XM is a proactive and integrated approach that provides a comprehensive view of potential attack surfaces and prioritises security actions based on an organisation’s specific context. It’s a process that combines cloud security posture, identity management, internal hosts, internet-facing hosts and threat intelligence into a unified framework, enabling security teams to anticipate potential attack vectors and fortify their defences effectively. Unlike traditional security measures, XM takes an “outside-in” approach, assessing how attackers might exploit vulnerabilities across interconnected systems. This shift in mindset is crucial for identifying and prioritising the most significant threats. By focusing on the most critical vulnerabilities and potential attack paths, XM allows security teams to allocate resources more efficiently and enhance their overall security posture. ... By providing a unified view of the entire attack path, XM improves an organisation’s ability to manage security risks. This unified view allows security teams to understand how vulnerabilities can be exploited and prioritise those that pose the greatest risk. Security teams are then able to guarantee efficient resource allocation and focus on threats with the most significant impact on business operations.


How GenAI is Exposing the Limits of Data Centre Infrastructure

Energy-intensive Graphics Processing Units (GPUs) that power AI platforms require five to 10 times more energy than Central Processing Units (CPUs) because of their larger number of transistors. This is already impacting data centres. There are also new, cost-effective design methodologies incorporating features such as 3D silicon stacking, which allows GPU manufacturers to pack more components into a smaller footprint. This again increases the power density, meaning data centres need more energy and generate more heat. Another trend running in parallel is a steady fall in TCase (or Case Temperature) in the latest chips. TCase is the maximum safe temperature for the surface of chips such as GPUs. It is a limit set by the manufacturer to ensure the chip will run smoothly and not overheat or require throttling, which impacts performance. On newer chips, TCase is coming down from 90 to 100 degrees Celsius to 70 or 80 degrees, or even lower. This is further driving the demand for new ways to cool GPUs. As a result of these factors, air cooling is no longer doing the job when it comes to AI. It is not just the power of the components, but the density of those components in the data centre. Unless servers become three times bigger than they were before, more efficient heat removal is needed.


The Configuration Crisis and Developer Dependency on AI

As our IT infrastructure grows ever more modular, layered and interconnected, we deal with myriad configurable parts — each one governed by a dense thicket of settings. All of our computers — whether in our pockets, on our desks or in the cloud — have a bewildering labyrinth of components with settings to discover and fiddle with, both individually and in combination. ... A couple of strategies I’ve mentioned before bear repeating. One is the use of screenshots, which are now a powerful index in the corpus of synthesized knowledge. Like all forms of web software, the cloud platforms’ GUI consoles present a haphazard mix of UX idioms. A maneuver that is conceptually the same across platforms will often be expressed using very different affordances. AIs are pattern recognizers that can help us see and work with the common underlying patterns.


From project to product: Architecting the future of enterprise technology

Modern enterprise architecture requires thinking like an urban planner rather than a building inspector. This means creating environments that enable innovation while ensuring system integrity and sustainability. ... Just as urban planners need to develop a shared vocabulary with city officials, developers and citizens, enterprise architects must establish a common language that bridges technical and business domains. Complex ideas that remain purely verbal often get lost or misunderstood. Documentation and diagrams transform abstract discussions into something tangible. By articulating fitness functions — automated tests tied to specific quality attributes like reliability, security or performance — teams can visualize and measure system qualities that align with business goals. ... Technology governance alone will often just inform you of capability gaps, tech debt and duplication — this could be too late! Enterprise architects must shift their focus to business enablement. This is much more proactive in understanding the business objectives and planning and mapping the path for delivery. ... Just as cities must evolve while preserving their essential character, modern enterprise architecture requires built-in mechanisms for sustainable change. 
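
As an example of the kind of fitness function described above, a small automated test can tie a measurable threshold to a quality attribute such as performance. The sketch below is illustrative; the endpoint, sample count, and latency budget are hypothetical and would be agreed with the business:

    # Hedged sketch of a fitness function: an automated test that fails the
    # build when a quality attribute drifts. Endpoint and budget are hypothetical.
    import time
    import statistics
    import requests

    LATENCY_BUDGET_MS = 200                               # illustrative budget
    ENDPOINT = "https://orders.example.internal/health"   # hypothetical service

    def test_p95_latency_within_budget():
        """Performance fitness function: p95 latency must stay under budget."""
        samples = []
        for _ in range(20):
            start = time.perf_counter()
            requests.get(ENDPOINT, timeout=2)
            samples.append((time.perf_counter() - start) * 1000)
        p95 = statistics.quantiles(samples, n=20)[18]     # 95th percentile
        assert p95 <= LATENCY_BUDGET_MS, f"p95 latency {p95:.0f} ms exceeds budget"

Run as part of the delivery pipeline, a check like this turns an abstract quality goal into something the team can see fail before users do.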



Quote for the day:

"Your present circumstances don’t determine where you can go; they merely determine where you start." -- Nido Qubein

Daily Tech Digest - February 26, 2023

Skills-based hiring continues to rise as degree requirements fade

Hard skills, such as cybersecurity and software development, are still in peak demand, but organizations are finding soft skills can be just as important, according to Jamie Kohn, research director in Gartner Research’s human resources practice. Soft skills, which are often innate, include adaptability, leadership, communications, creativity, problem solving or critical thinking, good interpersonal skills and the ability to collaborate with others. “Also, people don’t learn all their [hard] skills at college,” Kohn said. “They haven’t for some time, but there’s definitely a surge in self-taught skills or taking online courses. You may have a history major who’s a great programmer. That’s not at all unusual anymore. Companies that don’t consider that are missing out by requiring specific degrees.” ... Much of the recent shift to skills-based hiring is due to the dearth of tech talent created by the Great Resignation and a growing number of digital transformation projects. While the US unemployment rate hovers around 3.5%, in technology fields, it’s less than half that (1.5%).


Do digital IDs work? The nation with one card for everything

No system is foolproof: a potential security flaw uncovered in 2017 required the temporary suspension of 800,000 cards. Users occasionally receive notifications that some data may have been compromised. Nor does it eliminate fraud. According to official figures, Estonians were scammed out of €5 million (£4.4 million) last year, through a familiar mixture of fraudulent calls, phishing messages and fake websites in which they were tricked into handing over passwords. But in two decades no one has decrypted the technology itself. Indeed, the system has become so embedded in Estonian life it is hard to find anyone who knocks it. Its few critics are mostly to be found among the ranks of the populist right-wing Conservative People’s Party of Estonia (Ekre), which is expected to take third place in the election. The party has expressed scepticism about the security of online voting — and its supporters are less likely to use it than supporters of mainstream parties. But although it spent two years in a coalition government from 2019, Ekre did nothing to dismantle the system.


Microsoft trains ChatGPT to control robots

The team plans to leverage the platform's ability to develop coherent and grammatically correct responses to various prompts and questions and see if ChatGPT can think beyond the text and reason about the physical world to help with robotics tasks. "We want to help people interact with robots more easily, without needing to learn complex programming languages or details about robotic systems." The key obstacle for an AI-based language model is solving problems that account for the laws of physics, the context of the operating environment, and how the robot’s physical actions can change the state of the world. Even though ChatGPT can do a lot alone, it still needs some help. Microsoft has released a series of design principles, including unique prompting structures, high-level APIs, and human feedback via text. These principles can be used to guide language models toward solving robotics tasks. The firm is also introducing PromptCraft, an open-source platform where anyone can "share examples of prompting strategies for different robotics categories."
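
As a concrete, hedged sketch of the "high-level API plus prompt" idea (the function names and wording below are illustrative, not Microsoft's actual PromptCraft examples), the pattern is to describe a small robot API to the model and ask it to respond only with calls to that API, which a human then reviews before anything runs:

    # Sketch of the "high-level API + prompt" pattern; function names and
    # prompt wording are illustrative, not Microsoft's actual examples.

    ROBOT_API = """
    You control a mobile robot through these Python functions only:
      move_to(x, y)         # drive to a position in metres
      pick_up(object_name)  # grasp a named object within reach
      place_on(surface)     # put the held object on a named surface
    Respond only with Python code that calls these functions.
    """

    def build_prompt(task: str) -> str:
        """Combine the high-level API description with a natural-language task."""
        return f"{ROBOT_API}\nTask: {task}\nCode:"

    prompt = build_prompt(
        "Fetch the red cup from the kitchen table and bring it to the desk.")
    print(prompt)  # sent to the language model; the reply is reviewed by a human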


Job Interview Advice for Junior Developers

Remember the audience you are talking to. Human resources personnel are very different to the dev guy sitting in the interview with them. You won’t be working with anyone in HR, but they are the gatekeepers. In smaller outfits, there will probably be a manager type and a dev guy. It doesn’t hurt to empathize with how their views differ. Make sure you are comfortable with all the parts of the agile development cycle, not just the bits your team happened to pick up in your last project. I have never used pull requests, but they are de rigueur in many places. Similarly code reviews. Retrospectives might seem pointless to you, but how else do you improve a process? Don’t assume that what your team called DevOps or Kanban is exactly the same as what everyone else does. One of the biggest snags in an interview — I think the most common — is when the interviewee shows little enthusiasm for an aspect of development the interviewer happens to think is important. Your negative hot take on a mainstream process, especially without experience weighing in behind you, may well be seen as a sign of rigidity.


Cyberattacks hit data centers to steal information from global companies

Resecurity identified several actors on the dark web, potentially originating from Asia, who during the course of the campaign managed to access customer records and exfiltrate them from one or multiple databases related to specific applications and systems used by several data center organizations. In at least one of the cases, initial access was likely gained via a vulnerable helpdesk or ticket management module that was integrated with other applications and systems, which allowed the threat actor to perform a lateral movement. The threat actor was able to extract a list of CCTV cameras with associated video stream identifiers used to monitor data center environments, as well as credential information related to data center IT staff and customers, Resecurity said. Once the credentials were collected, the actor performed active probing to collect information about representatives of the enterprise customers who manage operations at the data center, lists of purchased services, and deployed equipment.


Avoiding vendor lock-in and the data gravity trap

Today, enterprises are adopting hybrid cloud and multicloud strategies to avoid vendor lock-in and the data gravity trap. In fact, many enterprises are choosing vendors who provide cloud-agnostic services that are also open source to benefit from the most freedom and to avoid vendor lock-in. In addition, working with vendors with large partner ecosystems can help reduce the risks of lock-in. Open standards are the antidote to proprietary technology. Open standards allow users to move freely between vendors—to mix and match or integrate—with competing vendors to create their own solution. They allow you to compose your service or system freely and liberate you from the proprietary interfaces of a vendor. Open standards came about from our prior experiences of vendor lock-in. If we forget that history, we are condemned to repeat it. ... While the general movement is toward the concept of “composable IT,” where software-defined infrastructure and application components interoperate seamlessly to make businesses nimble, there are times when vendor lock-in makes sense.


Boards tapping CIOs to drive strategy

“CIOs are playing a leading role in orchestrating transformation and are stepping up in response to the changing industry dynamics,” said Logicalis CEO Bob Bailkoski, “yet they are faced with challenges to navigate, including a potential recession and talent shortages.” “In addition to this, they are experiencing increased pressure to deliver digital-based outcomes for their organisations, giving them more exposure to their boards and requiring a different way of operating.” In many companies, that “different way” seems to be that their traditional remit – implementing technology to support business strategy – has changed dramatically: 41 per cent reported having some level of responsibility for business strategy as well as the technology to deliver it, and 80 per cent said business strategy will become a bigger part of their role in the next two years. Innovation and digital transformation are two of four primary areas of focus for CIOs identified in the study – the others being strategy and the reimagination of service partnerships, which is expected to see three-quarters of CIOs spending more on IT outsource management this year than last.


How Companies Are Using Data to Optimize Manufacturing Operations

Real-time agility requires combining data from multiple sources to create new insights for use cases like machine learning. Manufacturers are familiar with streaming data today, but this is nowhere near the point of saturation. Everything from supply chain shortages to COVID-19 sick time and weather disruptions has made the simple task of getting shipments from point A to B complicated and uncertain. However, companies that consolidate data from asset-based sensors, predictive maintenance algorithms, and ordering and staffing systems (such as enterprise resource management, supply chain management and human resources software), can use it to respond much more quickly, and keep assets operating at maximum efficiency. Industrial asset management, especially in transportation, allows companies to showcase the value of real- or near-real-time data in a range of scenarios, from airlines to railroads. Large-scale assets deployed in the past few years generate a tremendous amount of streaming data. 


3 ways to retool your architecture to reduce costs

When you have one team developing and operating one application deployed on one server, it's not difficult to figure out the infrastructure, labor, and software costs of developing the application. However, when you have thousands of applications, it becomes a real mess to trace every penny flowing into your application's TCO. ... The reality is far more complicated. For platforms, especially hybrid cloud, some costs may go away immediately, while some will be redistributed to new application portfolios. The labor costs associated with the application will remain unless you actually eliminate that labor; otherwise, those costs will be redistributed across the other applications the team supports. In other words, labor costs will increase for those other applications. ... There are two views of measuring excess capacity. First, to decrease unused reservations by application owners, you must allocate total platform cost by total reservations. However, for platform owners to understand excess capacity compared to the total built capacity, you must allocate the total platform cost by the total built capacity.
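
A toy calculation makes the two views concrete (all figures below are invented for illustration):

    # Two views of excess capacity, with invented numbers.
    platform_cost = 100_000    # total monthly platform cost, in dollars
    built_capacity = 1_000     # total vCPUs built into the platform
    reserved_capacity = 700    # vCPUs reserved by application owners

    # View 1: allocate cost over reservations, so application owners feel
    # the price of every vCPU they reserve, used or not.
    cost_per_reserved_vcpu = platform_cost / reserved_capacity    # ~$142.86

    # View 2: allocate cost over built capacity, so the platform owner sees
    # the cost of the 300 vCPUs nobody has reserved at all.
    cost_per_built_vcpu = platform_cost / built_capacity          # $100.00
    unreserved_cost = (built_capacity - reserved_capacity) * cost_per_built_vcpu

    print(f"Owner view:    ${cost_per_reserved_vcpu:,.2f} per reserved vCPU")
    print(f"Platform view: ${unreserved_cost:,.2f}/month of unreserved capacity")

The first view pushes application owners to shrink unused reservations; the second shows the platform owner how much built capacity is not earning its keep.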


4 steps to supercharge IT leaders

IT leaders understand well that it’s not just willpower that leads to success; it’s also “way power,” or the “how” of achieving your goals. Being able to find a pathway forward is just the starting point. Leaders today must be able to pivot in real time when a new opportunity, obstacle, or crisis surfaces. The more options you have, the more likely you are to achieve your goals. Consider the example of a complex digital initiative. Let’s say you’ve identified a way forward plus a backup plan for your top purposes. Implementation will almost certainly be nonlinear. What happens when you hit obstacles? ... Many things cloud or limit our vision or even blindside us. We are all affected by our personalities, beliefs, and assumptions. We are also affected by the data that we choose to rely on. How can you be more confident that what you’re seeing is real? Start by taking another look at your external priorities. You might be exaggerating or discounting the opportunities or threats that you see. ... How confident are you about timing for essential deliverables on your critical path? 



Quote for the day:

"The speed of the leader is the speed of the gang." -- Mary Kay Ash

Daily Tech Digest - August 09, 2022

Deepfakes Grow in Sophistication, Cyberattacks Rise Following Ukraine War

The use of deepfakes to evade security controls and compromise organizations is on the rise among cybercriminals, with researchers seeing a 13% increase in the use of deepfakes compared with last year. That's according to VMware's eighth annual "Global Incident Response Threat Report," which says that email is usually the top delivery method. The study, which surveyed 125 cybersecurity and incident response (IR) professionals from around the world, also reveals an uptick in overall cybersecurity attacks since Russia's invasion of Ukraine; extortionary ransomware attacks including double extortion techniques, data auctions, and blackmail; and attacks on APIs. "Attackers view IT as the golden ticket into an organization's network, but unfortunately, it is just the start of their campaign," explains Rick McElroy, principal cybersecurity strategist at VMware. "The SolarWinds attack gave threat actors looking to target vendors a step-by-step manual of how to successfully pull off an attack." He says that keeping this in mind, IT and security teams need to work hand in hand to ensure all access points are secure to prevent an attack like that from harming their own organization.


How CFOs and CISOs Can Build Strong Partnerships

“There is no substitute for regular communication,” he said. “In addition to the formal, structured channels, I have found it most helpful to just talk to Lena and her team about key initiatives, any issues concerning them, and overall trends in security and the business more broadly.” If possible, conversations between the CISO and chief financial officer should also include the chief privacy officer, said Raj Patel, partner and cybersecurity practice leader at consulting firm Plante Moran. “Each has a role in protecting data and assets,” he said. “The conversation can start simply by scheduling a meeting around it.” These talks should take place at least quarterly, according to Patel, and should not be focused solely on the budget. “We don’t fight a war on budgets but do what we need to defend ourselves,” he said. “When our organizations get attacked every day, we are in a war. Many finance executives focus on a budget and at times compare it to prior budgets. When it comes to cybersecurity, the focus needs to be on risk, and allocating financial resources should be based on risk.”


The cloud ate my database

Work on Postgres, the forerunner of PostgreSQL, began in 1986, and MySQL followed less than a decade later in 1995. Neither displaced the incumbents—at least, not for traditional workloads. MySQL arguably took the smarter path early on, powering a host of new applications and becoming the “M” in the famous LAMP stack (Linux, Apache, MySQL, PHP/Perl/Python) that developers used to build the first wave of websites. Oracle, SQL Server, and DB2, meanwhile, kept to their course of running the “serious” workloads powering the enterprise. Developers loved these open source databases because they offered freedom to build without much friction from traditional gatekeepers like legal and purchasing. Along the way, open source made inroads with IT buyers, as Gartner showcases. Then the cloud happened and pushed database evolution into overdrive. Unlike open source, which came from smaller communities and companies, the cloud came with multibillion-dollar engineering budgets, as I wrote in 2016. Rather than reinvent the open source database wheel, the cloud giants embraced databases such as MySQL and turned them into cloud services like Amazon RDS.


Everything CISOs Need to Know About NIST

When it comes to protecting your data, NIST is the gold standard. That said, the government does not mandate it for every industry. CISOs should comply with NIST standards, but business leaders can handle risk management with whichever approach and standards they believe will best suit their business model. However, federal agencies must use these standards. As the U.S. government endorses NIST, it came as little surprise when Washington declared these standards the official security control guidelines for information systems at federal agencies in 2017. Similarly, if CISOs work with the federal government as contractors or subcontractors, they must follow NIST security standards. With that in mind, any contractor who has a history of NIST noncompliance may be excluded from future government contracts. The Cybersecurity Framework is one of the most widely adopted standards from NIST. While optional, this framework is a trusted resource that many companies adhere to when attempting to reduce risk and improve their cybersecurity systems and management. 


What Does The Future Hold For Serverless?

In production-level serverless applications, monitoring your application is paramount to your success. You need to know if you’ve dropped any events, where the bottlenecks are, and if items are piling up in dead letter queues. Not to mention you need the ability to trace a transaction end to end. This is an area that is finally beginning to take off. As more and more serverless production workloads are coming online, it is becoming increasingly obvious there’s a gap in this space. Vendors like DataDog, Lumigo, and Thundra all attempt to solve this problem - with pretty good success. But it needs to be better. In the future we need tools like what the vendors listed above offer, but with optimization and insights built-in like AWS Trusted Advisor. We need app monitoring to evolve. When we hear application monitoring, we need to assume more than service graphs and queue counts. Application monitoring will become more than fancy dashboards and slack messages. It will eventually tell us we provisioned the wrong infrastructure from the workload it sees.
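
One basic example of the kind of check the article implies is alerting when messages start piling up in a dead letter queue. Below is a minimal boto3 sketch; the queue name, threshold, and region are hypothetical, and it assumes AWS credentials are already configured:

    # Sketch: alarm when a dead letter queue stops being empty.
    # Queue name, region and threshold are hypothetical.
    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    cloudwatch.put_metric_alarm(
        AlarmName="orders-dlq-not-empty",
        Namespace="AWS/SQS",
        MetricName="ApproximateNumberOfMessagesVisible",
        Dimensions=[{"Name": "QueueName", "Value": "orders-dlq"}],
        Statistic="Maximum",
        Period=300,                 # evaluate over 5-minute windows
        EvaluationPeriods=1,
        Threshold=1,
        ComparisonOperator="GreaterThanOrEqualToThreshold",
        TreatMissingData="notBreaching",
    )

A check like this only tells you that something landed in the DLQ; the end-to-end tracing and built-in optimization advice the author is asking for sit a layer above it.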


Cybersecurity on the board: How the CISO role is evolving for a new era

More and more businesses agree. Gartner's survey of board directors found that 88% view cybersecurity as not only a technical problem for IT departments to solve, but a fundamental risk to how their businesses operate. That’s hardly surprising, given the recent history of hacks against private businesses. ... Ensuring the CISO has a seat on the board is one way of ensuring a company has a firm handle on how to handle these risks to the business. Even so, says Andrew Rose, resident CISO at security company Proofpoint, they should be careful in how they communicate their concerns. “The 'sky is falling' narrative can be used once or twice, but after that, the board will become a bit numb to it all,” Rose explains. Forcing boards to prioritise cybersecurity should instead be done through positive affirmation, argues Carson - and, ideally, be framed in how shoring up the company’s defences will help it perform better in the long term. “You need to show them how this is going to help the business be successful, how it will help employees to do their jobs better, provide value to the shareholders, [and] return an investment,” he says.


Neuro-symbolic AI brings us closer to machines with common sense

Artificial intelligence research has made great achievements in solving specific applications, but we’re still far from the kind of general-purpose AI systems that scientists have been dreaming of for decades. Among the solutions being explored to overcome the barriers of AI is the idea of neuro-symbolic systems that bring together the best of different branches of computer science. In a talk at the IBM Neuro-Symbolic AI Workshop, Joshua Tenenbaum, professor of computational cognitive science at the Massachusetts Institute of Technology, explained how neuro-symbolic systems can help to address some of the key problems of current AI systems. Among the many gaps in AI, Tenenbaum is focused on one in particular: “How do we go beyond the idea of intelligence as recognizing patterns in data and approximating functions and more toward the idea of all the things the human mind does when you’re modeling the world, explaining and understanding the things you’re seeing, imagining things that you can’t see but could happen, and making them into goals that you can achieve by planning actions and solving problems?”


IT Security Decision-Makers Struggle to Implement Strategies

While businesses still have many privileged identities left unprotected, such as application and machine identities, attackers will continue to exploit and impact business operations in return for a ransom payment, Carson said. "The good news is that organizations realize the high priority of protecting privileged identities," he added. "The sad news is that many privileged identities are still exposed as it is not enough just to secure human privileged identities." ... The security gap is not only increasing between the business and attackers, but also between the IT leaders and the business executives, according to Carson. "While in some industries this is improving, the issue still exists," he said. "Until we solve the challenge on how to communicate the importance of cybersecurity to the executive board and business, IT security decision-makers will continue to struggle to get the needed resources and budget to close the security gap." From Carson's perspective, that means there needs to be a change in the attitude at the C-suite level.


GraphQL is a big deal: Why isn’t it the industry standard for database querying?

What if you could leverage the expressive attributes of SQL and the flexibility of GraphQL at the same time? There are technologies available that claim to do that, but they are unlikely to become popular because they end up being awkward and complex. The awkwardness arises from attempting to force SQL constructs into GraphQL. But they are different query languages with different purposes. If developers have to learn how to do SQL constructs in GraphQL, they might as well use SQL and connect to the database directly. However, all is not lost. We believe GraphQL will become more expressive over time. There are proposals to make GraphQL more expressive. These may eventually become standards. But fundamentally, SQL and GraphQL have different world views, respectively: uniform backends vs. diverse backends, tables vs. hierarchical data, and universal querying vs. limited querying. Consequently, they serve different purposes. 
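
To make the difference in world views concrete, here is the same request written both ways; the schema and field names are hypothetical:

    # The same request in both languages (hypothetical schema).
    # SQL thinks in flat tables joined together; GraphQL asks for a
    # hierarchy shaped like the response the client wants back.

    SQL_QUERY = """
    SELECT c.name, o.id, o.total
    FROM customers AS c
    JOIN orders AS o ON o.customer_id = c.id
    WHERE c.id = 42;
    """

    GRAPHQL_QUERY = """
    query {
      customer(id: 42) {
        name
        orders { id total }
      }
    }
    """

The SQL result comes back as flat rows the client must reassemble, while the GraphQL result is already nested; conversely, arbitrary joins and aggregates remain far easier to express in SQL, which is the tension the article describes.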


ESG: Building On Commitments On The E To Boost The S & The G

The first milestone could very well apply to enhancing data on anti-corruption. Challenges, of course, exist—corruption tends to be more political within organisations, and there can be hesitation to report on incidences of it. Measuring progress on reducing corruption is challenging, and indicators have to be carefully considered. For example, if the number of reported cases of crime increases in a given period, it could mean different things: anti-corruption mechanisms are working better and are well enough designed to identify corruption; people trust in the whistleblowing system and feel confident to report; or, indeed, corruption levels are going up. Nevertheless, academic scholarship and investment in anti-corruption are resulting in new indicators being developed (for example, the recently updated Index of Public Integrity [IPI] and the Transparency Index [T-Index] developed by Professor Alina Mungiu-Pippidi of the Hertie School). Collaboration with researchers and anti-corruption specialists could help design better data-collection methods.



Quote for the day:

"Leadership is a potent combination of strategy and character. But if you must be without one, be without the strategy." -- Norman Schwarzkopf

Daily Tech Digest - August 04, 2020

Ethical AI in healthcare

In many ways these technologies are going to be shaping us even before we've answered this question. We'll wake up one morning and realize that we have been shaped. But maybe there is an opportunity for each of us, in our own settings and in conversations with our colleagues and at the dinner table, and with society, more broadly, to ask the question, What are we really working toward? What would we be willing to give up in order to realize the benefits? And can we build some consensus around that?  How can we, on the one hand, take advantage of the benefits of AI-enabled technologies and on the other, ensure that we're continuing to care? What would that world look like? How can we maintain the reason why we came into medicine in the first place, because we care about people, how can we ensure that we don't inadvertently lose that?  The optimistic view is that, by virtue of freeing up time by moving some tasks off of clinicians’ desks, and moving the clinician away from the screen, maybe we can create space, and sustain space for caring. The hope that is often articulated is that AI will free up time, potentially, for what really matters most. That's the aspiration. But the question we need to ask ourselves is, What would be the enabling conditions for that to be realized?


Apache Cassandra’s road to the cloud

What makes the goal of open sourcing cloud-native extensions to Cassandra achievable is the emergence of Kubernetes and related technologies. The fact that all of these technologies are open source and that Kubernetes has become the de facto standard for container orchestration has made it thinkable for herds of cats to converge, at least around a common API. And enterprises embracing the cloud has created demand for something to happen, now. A cloud-native special interest group has formed within the Apache Cassandra community and is still at the early stages of scoping out the task; this is not part of the official Apache project, at least not yet. Of course, the Apache Cassandra community had to get its own house in order first. As Steven J. Vaughan-Nichols recounted in his exhaustive post, Apache Cassandra 4.0 is quite definitive, not only in its feature-completeness, but also in the thoroughness with which it has fleshed out the bugs to make it production-ready. Unlike previous dot zero versions, when Cassandra 4.0 goes GA, it will be production-ready. The 4.0 release hardens the platform with faster data streaming, not only to boost replication performance between clusters, but also to make failover more robust. But 4.0 stopped short of anything to do with Kubernetes.


From doorbells to nuclear reactors: why focus on IoT security

An important step in network security for IoT is identifying the company’s most essential activities and putting protections around them. For manufacturing companies, the production line is the key process. Essential machinery must be segmented from other parts of the company’s internet network such as marketing, sales and accounting. For most companies, just five to 10% of operations are critical. Segmenting these assets is vital for protecting strategic operations from attacks. One of the greatest risks of the connected world is that something quite trivial, such as a cheap IoT sensor embedded in a doorbell or a fish tank, could end up having a huge impact on a business if it gets into the wrong communication flow and becomes an entry point for a cyber attack. To address these risks, segmentation should be at the heart of every company’s connected strategy. That means defining the purpose of every device and object linked to a network and setting boundaries, so it only connects to parts of the network that help it serve that purpose. With 5G, a system known as Network Slicing helps create segmentation. Network Slicing separates mobile data into different streams. Each stream is isolated from the next, so watching video could occur on a separate stream to a voice connection.


The ABCs of Data Science Algorithms

An organization’s raw data is the cornerstone of any data science strategy. Companies who have previously invested in big data often benefit from a more flexible cloud or hybrid IT infrastructure that is ready to deliver on the promise of predictive models for better decision making. Big data is the invaluable foundation of a truly data-driven enterprise. In order to deploy AI solutions, companies should consider building a data lake -- a centralized repository that allows a business to store structured and unstructured data on a large scale -- before embarking on a digital transformation roadmap. To understand the fundamental importance of a solid infrastructure, let’s compare data to oil. In this scenario, data science serves as the refinery that turns raw data into valuable information for business. Other technologies -- business intelligence dashboards and reporting tools -- benefit from big data, but data science is the key to unleashing its true value. AI and machine learning algorithms reveal correlations and dependencies in business processes that would otherwise remain hidden in the organization’s collection of raw data. Ultimately, this actionable insight is like refined oil: It is the fuel that drives innovation, optimizing resources to make the business more efficient and profitable.


Soon, your brain will be connected to a computer. Can we stop hackers breaking in?

Some of the potential threats to BCIs will be carry-overs from other tech systems. Malware could cause problems with acquiring data from the brain, as well as sending signals from the device back to the cortex, either by altering or exfiltrating the data. Man-in-the-middle attacks could also be recast for BCIs: attackers could either intercept the data being gathered from the headset and replace it with their own, or intercept the data being used to stimulate the user's brain and replace it with an alternative. Hackers could use methods like these to get BCI users to inadvertently give up sensitive information, or gather enough data to mimic the neural activity needed to log into work or personal accounts. Other threats to BCI security will be unique to brain-computer interfaces. Researchers have identified malicious external stimuli as one of the most potentially damaging attacks that could be used on BCIs: feeding in specially crafted stimuli to affect either the users or the BCI itself to try to get out certain information, showing users images to gather their reactions to them, for example. Other similar attacks could be carried out to hijack users' BCI systems, by feeding in fake versions of the neural inputs causing them to take unintended actions – potentially turning BCIs into bots, for example.


Digital Strategy In A Time Of Crisis

The priority is to protect employees and ensure business continuity. To achieve this, it is essential to continue adapting the IT infrastructure needed for massive remote working and to continue the deployment of the collaborative digital systems. Beyond these new challenges, the increased risks related to cybersecurity and the maintenance of IT assets, particularly the application base, require vigilance. After responding to the emergency, the project portfolio and the technological agenda must be rethought. This may involve postponing or freezing projects that do not create short-term value in the new context. Conversely, it is necessary to strengthen transformation efforts capable of increasing agility and resilience, in terms of cybersecurity, advanced data analysis tools, planning, or even optimisation of the supply chain. The third major line of action in this crucial period of transition is to tighten human resources management, focusing on the large-scale deployment of agile methods, the development of sensitive expertise such as data science, artificial intelligence or cybersecurity. The war for talent will re-emerge in force when the recovery comes, and it is therefore important to strengthen the attractiveness of the company.


Why ISO 56000 Innovation Management matters to CIOs

The ISO 56000 series presents a new framework for innovation, laying out the fundamentals, structures and support that ISO leaders say are needed within an enterprise to create and sustain innovation. More specifically, the series provides guidance for organizations to understand and respond to changing conditions, to pursue new opportunities and to apply the knowledge and creativity of people within the organization and in collaboration with external interested parties, said Alice de Casanove, chairwoman of the ISO 56000 standard series and innovation director at Airbus. ISO, which started work on these standards in 2013, started publishing its guidelines last year. The ISO 56002 guide for Innovation management system and ISO 56003 Tools and methods for innovation partnership were published in 2019. ISO released its Innovation management -- Fundamentals and vocabulary in February 2020. Four additional parts of the series are forthcoming. The committee developed the innovation standards so that they'd be applicable to organizations of all types and sizes, de Casanove said. "All leaders want to move from serendipity to a structured approach to innovation management," she explained.


How plans to automate coding could mean big changes ahead

Known as a "code similarity system", the principle that underpins MISIM is not new: technologies that try to determine whether a piece of code is similar to another one already exist, and are widely used by developers to gain insights from other existing programs. Facebook, for instance, uses a code recommendation system called Aroma, which, much like auto-text, recommends extensions for a snippet of code already written by engineers – based on the assumption that programmers often write code that is similar to that which has already been written. But most existing systems focus on how code is written in order to establish similarities with other programs. MISIM, on the other hand, looks at what a snippet of code intends to do, regardless of the way it is designed. This means that even if different languages, data structures and algorithms are used to perform the same computation, MISIM can still establish similarity. The tool uses a new technology called context-aware semantic structure (CASS), which lets MISIM interpret code at a higher level – not just a program's structure, but also its intent. When it is presented with code, the algorithm translates it into a form that represents what the software does, rather than how it is written; MISIM then compares the outcome it has found for the code to that of millions of other programs taken from online repositories.
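
As a trivial illustration of what "similar intent, different implementation" means, a code similarity system like MISIM should treat the two functions below as the same computation even though their structure differs (the example is illustrative, not taken from the MISIM work):

    # Two implementations a similarity system should score as equivalent.
    from functools import reduce

    def total_iterative(values):
        # Explicit loop with an accumulator.
        result = 0
        for v in values:
            result += v
        return result

    def total_functional(values):
        # Same intent expressed as a fold over the sequence.
        return reduce(lambda a, b: a + b, values, 0)

    assert total_iterative([1, 2, 3]) == total_functional([1, 2, 3])

Structural comparison sees a loop in one and a reduce in the other; an intent-level representation such as CASS is meant to map both to the same "sum the values" behaviour.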


RPA bots: Messy tech that might upend the software business

Where it gets interesting is that these RPA bots are basically building the infrastructure for all the other pieces to fit together such as AI, CRM, ERP and even documents. They believe in the long-heralded walled-garden approach in which enterprises choose one best-of-breed infrastructure platform like Salesforce, SAP or Oracle and build everything on top of that. History has shown that messy sometimes makes more sense. The internet did not develop from something clean and organized -- it flourished on top of TCP: a messy, inefficient and bloated protocol. Indeed, back in the early days of the internet, telecom engineers were working on an organized protocol stack called Open Systems Interconnection (OSI) that was engineered to be highly efficient. But then TCP came along as the inelegant alternative that happened to work and, more important, made it possible to add new devices that no one had planned on in the beginning. Automation Anywhere's CTO Prince Kohli said other kinds of messy technologies have followed the same path. After TCP, HTTP came along to provide a lingua franca for building web pages. Then, web developers started using it to connect applications using JavaScript Object Notation (JSON).



Quote for the day:

"You have to have your heart in the business and the business in your heart." -- An Wang