
Daily Tech Digest - April 07, 2019

Can you teach humor to an AI?


“Artificial intelligence will never get jokes like humans do,” says Kiki Hempelmann, a computational linguist who studies humor at Texas A&M University-Commerce. “In themselves, they have no need for humor. They miss completely context,” he adds. Tristan Miller, a computer scientist and linguist at Darmstadt University of Technology in Germany, elaborates on why context is so hard for machines to process: “Creative language — and humor in particular — is one of the hardest areas for computational intelligence to grasp.” Miller has analyzed more than 10,000 word plays and found it quite challenging. “It’s because it relies so much on real-world knowledge — background knowledge and commonsense knowledge. A computer doesn’t have these real-world experiences to draw on. It only knows what you tell it and what it draws from,” he concludes.



Security flaws in banking apps expose data and source code


Exposed source code, sensitive data, access to backend services via APIs and more have been uncovered after a researcher downloaded various financial apps from the Google Play store and found that it took, on average, just eight and a half minutes before they were reading the code. Vulnerabilities including lack of binary protections, insecure data storage, unintended data leakage, weak encryption and more were found in banking, credit card and mobile payments apps and are detailed in a report by cybersecurity company Arxan: In Plain Sight: The Vulnerability Epidemic in Financial Mobile Apps. "There's clearly a systemic issue here – it's not just one company, it's 30 companies and it's across multiple financial services verticals," Alissa Knight, cybersecurity analyst at global research and advisory firm Aite Group and the researcher behind the study, told ZDNet. The vast majority – 97 percent of the apps tested – were found to lack binary code protections, making it possible to reverse engineer or decompile the apps, exposing source code to analysis and tampering.


Why blockchain (might be) coming to an IoT implementation near you

Blockchain technology can be counter-intuitive to understand at a basic level, but it’s probably best thought of as a sort of distributed ledger keeping track of various transactions. Every “block” on the chain contains transactional records or other data to be secured against tampering, and is linked to the previous one by a cryptographic hash, which means that any tampering with the block will invalidate that connection. The nodes - which can be largely anything with a CPU in it - communicate via a decentralized, peer-to-peer network to share data and ensure the validity of the data in the chain. The system works because all the blocks have to agree with each other on the specifics of the data that they’re safeguarding, according to Nir Kshetri, a professor of management at the University of North Carolina. If someone attempts to alter a previous transaction on a given node, the rest of the data on the network pushes back. “The old record of the data is still there,” said Kshetri. That’s a powerful security technique - absent a bad actor successfully controlling all of the nodes on a given blockchain, the data protected by that blockchain can’t be falsified or otherwise fiddled with.
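
As a toy illustration of that hash-linking idea (my own sketch, not from the article, and omitting the network and consensus parts entirely), a few lines of Python show why editing one block breaks every later link:

```python
# Toy hash-linked chain: each block stores the previous block's hash,
# so editing any earlier block invalidates every later link.
import hashlib
import json

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

chain = []
prev = "0" * 64  # genesis marker
for i, record in enumerate(["tx: alice->bob 5", "tx: bob->carol 2"]):
    block = {"index": i, "data": record, "prev_hash": prev}
    prev = block_hash(block)
    chain.append(block)

def valid(chain):
    # Recompute each link; a mismatch means a block was tampered with.
    return all(block_hash(earlier) == later["prev_hash"]
               for earlier, later in zip(chain, chain[1:]))

print(valid(chain))                      # True
chain[0]["data"] = "tx: alice->bob 500"  # tamper with an early block
print(valid(chain))                      # False: the link no longer matches
```

In a real blockchain the same check runs on every node, which is why a single bad actor cannot quietly rewrite history.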


Researchers developed algorithms that mimic the human brain (and the results don’t suck)

Krotov and Hopfield’s work maintains the simplicity of the old school studies, but represents a novel step forward in brain-emulating neural networks. TNW spoke with Krotov who told us: If we talk about real neurobiology, there are many important details of how it works: complicated biophysical mechanisms of neurotransmitter dynamics at synaptic junctions, existence of more than one type of cells, details of spiking activities of those cells, etc. In our work, we ignore most of these details. Instead, we adopt one principle that is known to exist in the biological neural networks: the idea of locality. Neurons interact with each other only in pairs. In other words, our model is not an implementation of real biology, and in fact it is very far from the real biology, but rather it is a mathematical abstraction of biology to a single mathematical concept – locality. Modern deep learning methods often rely on a training technique called backpropagation, something that simply wouldn’t work in the human brain because it relies on non-local data.
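
To make the locality idea concrete, here is a minimal sketch of a Hebbian-style local update in Python with NumPy. This is my own illustration, not Krotov and Hopfield's actual algorithm: the point is only that each weight update uses the activity of the two neurons it connects, whereas backpropagation needs error signals carried in from distant layers.

```python
# Hebbian-style local learning: each weight w[i, j] is updated using only
# the presynaptic activity x[i] and postsynaptic activity y[j].
import numpy as np

rng = np.random.default_rng(0)
inputs = rng.standard_normal((100, 8))        # 100 samples, 8 input neurons
weights = 0.1 * rng.standard_normal((8, 4))   # 8 inputs -> 4 output neurons
lr = 0.01

for x in inputs:
    y = np.tanh(x @ weights)                  # postsynaptic activity
    # Local rule: the update for w[i, j] depends only on x[i] and y[j],
    # unlike backpropagation, which needs errors from downstream layers.
    weights += lr * np.outer(x, y)
    # Normalize columns so the weights stay bounded (Oja-style stabilization).
    weights /= np.linalg.norm(weights, axis=0, keepdims=True)
```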


Self-Service Delivery


Self-Service Delivery is an approach that makes the tools necessary to develop and deliver applications available via self-service. It makes the actions we need to take as developers — starting, developing and shipping software — available as user-accessible tools, so that we can work at our own speed without getting blocked. By making actions automated and accessible, it's easier to standardize configurations and practices across teams. We need specific building blocks to enable Self-Service Delivery. The same principle at the heart of your favorite framework applies to delivery. If we think of delivery phases in framework terms, each phase has a default implementation, which can be overridden. For example, if the convention is that Node projects in my team are built by running npm test, then I include a test script in my project. I don't write the code that runs the script, nor tell my build tool explicitly to do so. The same is true for other phases of delivery.
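
As a sketch of that convention-over-configuration idea (my own illustration, not any particular delivery platform's implementation; the phase name and fallback command are hypothetical), a pipeline step might look for a project-defined script and fall back to a default:

```python
# Convention over configuration in a delivery pipeline: run the project's
# own script for a phase if it defines one, else fall back to a default.
import json
import pathlib
import subprocess

def run_phase(phase: str, default_cmd: list[str]) -> None:
    manifest = pathlib.Path("package.json")
    if manifest.exists():
        scripts = json.loads(manifest.read_text()).get("scripts", {})
        if phase in scripts:
            # The project supplied an override; the pipeline just invokes it.
            subprocess.run(["npm", "run", phase], check=True)
            return
    subprocess.run(default_cmd, check=True)   # pipeline default

run_phase("test", ["echo", "no test script defined"])
```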


Artificial intelligence can now emulate human behaviors – soon it will be dangerously good

At the moment, there are enough potential errors in these technologies to give people a chance of detecting digital fabrications. Google's Bach composer made some mistakes an expert could detect. For example, when I tried it, the program allowed me to enter parallel fifths, a music interval that Bach studiously avoided. The app also broke musical rules of counterpoint by harmonizing melodies in the wrong key. Similarly, OpenAI's text-generating program occasionally wrote phrases like "fires happening under water" that made no sense in their contexts. As developers work on their creations, these mistakes will become rarer. Effectively, AI technologies will evolve and learn. The improved performance has the potential to bring many social benefits – including better health care, as AI programs help democratize the practice of medicine. Giving researchers and companies freedom to explore, in order to seek these positive achievements from AI systems, means opening up the risk of developing more advanced ways to create deception and other social problems. Severely limiting AI research could curb that progress.


The Race For Data And The Cybersecurity Challenges This Creates

High-tech today needs to be doing the exact same thing, with an emphasis on cybersecurity problems. Rather than sending devices and apps into the connected ecosystem willy-nilly, we need to fully understand what could happen when we do. How many people could be impacted? How many companies? What are the financial losses that could be sustained? What about losses to brand/image? In other words: do we really understand the implications of what we are creating here? These questions, if well researched, should be enough to slow down time-to-market and eventually stop breaking so many things. This analysis should be performed both at the development stage in every company and at the adoption stage. Companies creating products have a responsibility to their customers to ensure safety, and they can’t do that if they don’t fully take everything into account. On the other end of the spectrum, CIOs, CTOs, and anyone responsible for buying and adopting new tech in your business need to perform the same sort of analysis. Don’t just buy tech for tech’s sake.


Serverless computing growth softens, at least for now

Plans or intentions for serverless implementations have slipped as well, the Cloud Foundry survey also shows. Currently, 36 percent report evaluating serverless, compared to 42 percent in the previous survey.  Some of this may be attributable to the statistical aberrations that occur within surveys that are conducted within months of one another -- don't be surprised if the numbers pop again in the fall survey. Diving deeper into the adoption and planned adoption numbers, the survey's authors point out that within organizations embracing serverless architecture, usage is actually proliferating. For users and evaluators, 18 percent say they are broadly deploying serverless across their entire company, double the percentage (9 percent) who said that only one year ago.  Still, it is telling that there is some degree of caution being exercised when moving to serverless architecture. What's behind the caution?


Vulnerability Management: 3 Questions That Will Help Prioritize Patching


There is usually a significant delta between intended network segmentation and access rights, and what actually exists. Credentials and connections that introduce risk get set up in a variety of ways. We call this actual connectivity the “access footprint.” Throughout the normal work day, users connect and disconnect from various systems and applications, leaving behind cached credentials and potential “live” connections. The access footprint changes constantly. Some risky conditions are fleeting; others can persist for a very long time. But even if these conditions are short-lived, an attacker situated in the right place at the right time (“right” for them, wrong for you!) has plenty to work with. A new report published by CrowdStrike underscores the importance of proactively hardening the network against lateral movement. It’s a vitally important complement to traditional vulnerability management.


The Difference Between Microservices and Web Services


Microservices architecture involves breaking down a software application into its smaller components, rather than just having one large software application. Typically, this involves splitting up a software application into smaller, distinct business capabilities. These can then talk to each other via an interface. ... So, if microservices are like mini-applications that can talk to each other, then what are web services? Well, they are also mini-applications that can talk to each other, but over a network, in a defined format. They allow one piece of software to get input from another piece of software, or provide output, over a network. This is performed via a defined interface and language, such as XML. If you’re running on a network where your software components or services won’t be co-located, or you want the option of running them in separate locations in the future then you will likely need to use web services in some form.
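
A minimal sketch of that contract in Python (my own illustration, with hypothetical service names, and an in-process call standing in for the network hop): the consumer depends only on the agreed XML format, not on the producer's code.

```python
# One component produces output in a defined format (XML); another consumes
# it. In a real web service the XML would travel over the network via HTTP.
import xml.etree.ElementTree as ET

def inventory_service(item_id: int) -> str:
    # Producer side: respond in the agreed XML format.
    return f'<item id="{item_id}"><name>widget</name><stock>7</stock></item>'

def order_service(item_id: int) -> int:
    # Consumer side: no shared code, only the agreed format.
    doc = ET.fromstring(inventory_service(item_id))
    return int(doc.findtext("stock"))

print(order_service(42))  # 7
```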



Quote for the day:


"No amount of learning can cure stupidity and formal education positively fortifies it." -- Stephen Vizinczey


Daily Tech Digest - October 20, 2018


Habits, it seems, get in the way of change despite our best intentions. “Habits are triggered without awareness — they are repeated actions that worked in the past in a given context or in a similar experience,” she notes. Wood’s research shows that concentrating on changing unwanted behaviors, and then creating new ones — not focusing on motivation — is the key to making change. She cites various efforts aimed at changing smoking habits in the U.S. from 1952 to 1999. Smoking decreased not when smokers were made aware of the health risks, but when buying and smoking cigarettes was made more difficult and less rewarding. Thus, higher taxes, smoking bans in public places, and limits on point-of-purchase ads — which add friction to smoking — were a more effective deterrent than warning labels on cigarette packages and public service advertising about smoking’s negative effects. A similar strategy of changing the context is possible in the workplace: Make old actions more difficult; make new, desired actions easier and more rewarding.


7 Ways A Collaboration System Could Wreck Your IT Security


Before an IT group blithely answers the call for a collaboration system – by which we mean groupware applications such as Slack, Microsoft Teams, and Webex Teams – it's important to consider the security risks these systems may bring. That's because the same traits that make these, and similar, applications so useful for team communications also make them vulnerable to a number of different security issues. From their flexibility for working with third-party applications, to the ease with which team members can sign in and share data, low transactional friction can easily translate to low barriers for hackers to clear. When selecting and deploying collaboration tools, an IT staff should be on the lookout for a number of first-line issues and be prepared to deal with them in system architecture, add-ons, or deployment. The key is to make sure that the benefits of collaboration outweigh the risks that can enter the enterprise alongside the software.


Apache Kafka: Ten Best Practices to Optimize Your Deployment


A running Apache ZooKeeper cluster is a key dependency for running Kafka. But when using ZooKeeper alongside Kafka, there are some important best practices to keep in mind. The number of ZooKeeper nodes should be maxed at five. One node is suitable for a dev environment, and three nodes are enough for most production Kafka clusters. While a large Kafka deployment may call for five ZooKeeper nodes to reduce latency, the load placed on nodes must be taken into consideration. With seven or more nodes synced and handling requests, the load becomes immense and performance might take a noticeable hit. Also note that recent versions of Kafka place a much lower load on ZooKeeper than earlier versions, which used ZooKeeper to store consumer offsets. Finally, as is true with Kafka’s hardware needs, provide ZooKeeper with the strongest network bandwidth possible. Using the best disks, storing logs separately, isolating the ZooKeeper process, and disabling swaps will also reduce latency.


The Evolution of Mobile Malware


Mobile malware isn’t just an opportunistic tactic for cybercriminals. Kaspersky Lab is also seeing its use as part of targeted, prolonged campaigns that can affect many victims. One of the most notable discoveries this year was Skygofree. It is one of the most advanced mobile implants that Kaspersky Lab has ever seen. It has been active since 2014, and was designed for targeted cyber-surveillance. It is spread through web pages, mimicking leading mobile network operators. This was high-end mobile malware that is very difficult to identify and block, and the developers behind Skygofree have clearly used this to their advantage: creating and evolving an implant that can spy extensively on targets without arousing suspicion. ... In recent times, rooting malware has been the biggest threat to Android users. These Trojans are difficult to detect, boast an array of capabilities, and have been very popular among cybercriminals. Once an attacker has root access, the door is open to do almost anything.


What is the CMO's Technology Strategy for 2019 and Beyond?

Even CMOs who don’t have a technology background are becoming more tech-savvy. Integrate CMO Vaughan said he considers himself and his fellow marketers technology investors, managing a portfolio of tech to provide efficiency, effectiveness and unique capabilities for the company. “We view technology as an enabler of our strategy and an important part of advancing our marketing capabilities,” Vaughan said. “We have tried to be very disciplined about not buying tech for tech sake, which is not always easy to do today with so many options. We start with the strategy, what we are trying to accomplish and build a roadmap, including ROI and an adoption plan and model for each technology we evaluate.” Vaughan said CMOs should know what is available and at their disposal to differentiate and accelerate their strategy. “This does not mean you have to be a technology expert,” he said.


Privacy, Data, and the Consumer: What US Thinks About Sharing Data

Preventing data from being lost or stolen is the most obvious “table stake” for consumers. Just as important is the question of whether marketers should have it in the first place. This links clearly to the likes of GDPR in Europe, where the bar has been raised for all organizations around justification of the data they hold. But if we have the right data, for the right reasons, if we keep it safe and if we can make it more transparent how we’re using that data to provide a more respectful, personalized, fairer and rewarding service to the consumer, the trust will grow. Equally, we need to trust the consumer, again by providing transparent access to the data we hold, clarity around how we use it and the ability for them to control their data. Overall, the research shows that while consumers are rightly concerned about data privacy, they are also aware that data is an essential part of today’s economy, with 57% on average, globally, agreeing or strongly agreeing. Factor in the neutrals and around two-thirds of consumers are accepting or neutral around data use in today’s data-driven, data-enabled world.


NHS standards framework aims to set the bar for quality and efficiency


Although most of the standards in the framework aren’t necessarily new, they are “intended to be a clear articulation of what matters the most in our standards agenda, and is accompanied by a renewed commitment to their implementation,” said NHS Digital CEO Sarah Wilkinson in the framework’s foreword. Speaking at the UK Health Show on 25 September, Wilkinson said the potential for use of data in the NHS is huge, but the health service needs to get to grips with standards to reap the benefits. Most of the standards in the framework, which is currently in beta form and out for consultation, are based on international ones, though some are specialised for the NHS. This includes using the NHS number as a primary identifier – a standard which has been in place for a long time, but has had mixed results in uptake. The framework said the standard “is live now and should be adhered to in full immediately”.


Open Banking has arrived, whether you like it or not

Australia has introduced Open Banking rules that will force the banks to share data with trusted Third-Party Providers (TPPs) by June 2019; Mexico has introduced a Fintech Law; South Korea and Singapore have enforced rules around financial data sharing between banks and third parties; and the USA has seen several banks innovating around open financial structures, although there is no law enforcing them to do this, yet. What intrigues me about the market movements is that some large financial players are taking a lead in this space, such as Citibank and Deutsche Bank’s open API markets, whilst some are resisting the change. I have heard several reports in the UK that the large banks have made data sharing incredibly difficult for the customer, by making the permissioning process very onerous and time-consuming. Equally, the implementation of European rules under PSD2 has seen several Fintech firms cry foul, as each bank creates its own interpretation, and therefore API interface, of the law.


How Data Changed the World


Running a city is always a challenging task. With Big Data, however, come new opportunities alongside new challenges. Instead of having to rely on surveys and manually tracking how people move throughout an area, cities can instead rely on sensor-derived data, providing far greater resolution and a pool of data to draw from that is orders of magnitude larger than ever before available. Many of these advances may seem a bit mundane at first; developing improved traffic routes, for example, is unlikely to garner many headlines. However, these changes lead to concrete improvements, saving travelers time and improving overall quality of life. Furthermore, Big Data-derived improvements can inform city planners when deciding which direction their cities will take in the future. Before launching large and expensive projects, city managers will be able to look at information gleaned from Big Data to determine what the long-term effects will be, potentially changing cities in fundamental ways.


Give REST a Rest with RSocket


An often-cited reason to use REST is that it’s easy to debug because it’s “human readable”. Not being easy to read is a tooling issue. JSON text is only human readable because there are tools that allow you to read it – otherwise it’s just bytes on a wire. Furthermore, half the time the data being sent around is either compressed or encrypted — both of which aren’t human readable. Besides, how much of this can a person “debug” by reading? If you have a service that averages a tiny 10 requests per second with a 1 kilobyte JSON payload, that is equivalent to roughly 860 megabytes of data a day, or 250 copies of War and Peace every day. There is no one who can read that, so you’re just wasting money. Then, there is the case where you need to send binary data around, or you want to use a binary format instead of JSON. To do that, you must Base64 encode the data. This means that you essentially serialize the data twice — again, not an efficient way to use modern hardware.
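
To make those overheads concrete, here is a quick back-of-the-envelope check in Python (my own illustration, not from the article): the daily volume of a 10-requests-per-second JSON service, and the roughly one-third size inflation that Base64 encoding adds to binary payloads.

```python
# Back-of-the-envelope check of the numbers above.
import base64
import os

# 10 requests/second, 1 KB of JSON each, for a full day.
bytes_per_day = 10 * 1000 * 60 * 60 * 24
print(f"{bytes_per_day / 1e6:.0f} MB per day")   # ~864 MB, the cited ~860 MB

# Base64 expands binary data by about a third (4 output bytes per 3 input).
raw = os.urandom(1024)                           # 1 KB of binary payload
encoded = base64.b64encode(raw)
print(len(raw), "->", len(encoded), "bytes")     # 1024 -> 1368, ~33% larger
```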



Quote for the day:


"Managers maintain an efficient status quo while leaders attack the status quo to create something new." -- Orrin Woodward


Daily Tech Digest - June 25, 2018

What Is A Zero-Day Exploit? A Powerful But Fragile Weapon

A zero-day is a security flaw that has not yet been patched by the vendor and can be exploited and turned into a powerful but fragile weapon. Governments discover, purchase, and use zero-days for military, intelligence and law enforcement purposes — a controversial practice, as it leaves society defenseless against other attackers who discover the same vulnerability. Zero-days command high prices on the black market, but bug bounties aim to encourage discovery and reporting of security flaws to the vendor. The patching crisis means zero-days are becoming less important, and so-called 0ld-days become almost as effective. A zero-day gets its name from the number of days that a patch has existed for the flaw: zero. Once the vendor announces a security patch, the bug is no longer a zero-day (or "oh-day" as the cool kids like to say). After that the security flaw joins the ranks of endless legions of patchable but unpatched old days. In the past, say ten years ago, a single zero-day might have been enough for remote pwnage. This made discovery and possession of any given zero-day extremely powerful.



Address network scalability requirements when selecting SD-WAN


Calculating scalability based on the number of sites can be trickier. Not only do scalability requirements include provisioning sufficient bandwidth for all your sites, but network architecture matters when considering the scale needed to support a large number of branches. Some SD-WAN offerings are designed to spin up a virtual pipe from every site to every other site and maintain it perpetually. That option puts a large burden of VPN management on the service, which grows quadratically with the number of sites. Other SD-WAN services may also depend on VPNs, but without the need to have each VPN on constantly. For example, the service might allow customers to precalculate some of the necessary operating parameters for the VPNs and instantiate them only when needed for a network session. This option can support far more nodes than the previous design. Still, other SD-WAN products take a different approach entirely, without big VPN meshes. These employ architectures where the work of supporting the N+1st site is the same as the work of supporting the second site. This design could support even more nodes.
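
The growth rate is easy to see with a little arithmetic (my own sketch, not from the article): a full mesh needs one tunnel per pair of sites, so the count is n(n-1)/2.

```python
# Tunnel count for a full mesh: one VPN per pair of sites.
def full_mesh_tunnels(sites: int) -> int:
    return sites * (sites - 1) // 2

for n in (10, 100, 1000):
    print(n, "sites ->", full_mesh_tunnels(n), "tunnels")
# 10 sites -> 45 tunnels, 100 -> 4950, 1000 -> 499500
```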


Ex-Treasury Official Touts the Promise of Fintech ‘Sandboxes’

As it stands now, “there’s nothing that calls itself a sandbox” in the U.S., Crane said. But comments by a Treasury official on Thursday at a SIFMA event, about an upcoming Treasury report on such regulatory tools, signal promise of movement. What exactly is a “regulatory sandbox”? As the RegTechLab report explains, it’s a new tool allowing companies “to test new products, services, delivery channels or business models in a live environment, subject to appropriate conditions and safeguards.” Regulators, the report continues, “have also taken other proactive steps to engage with industry directly, and in some cases pursued mechanisms less formal than sandboxes to facilitate testing or piloting of new innovations.” Craig Phillips, counselor to the Treasury secretary, weighed in on Thursday that the financial services landscape “has over 3,300 new fintech companies” and “over 20% of all personal loans” originate in the fintech marketplace. “We need a new approach by regulators that permits experimentation for services and processes,” said Phillips, adding that it could include regulatory sandboxes, aka innovation facilitators.


Adapting to the rise of the holistic application


A shift in mindset is needed. McFadin says it is much harder to call a project “done”, as each element can be changed or updated at any time. While the services can be more flexible, it is necessary to think differently about the role of software developers. Companies that have implemented agile development properly should be equipped to manage this change more effectively. However, those that just namecheck agile, or don’t engage in the process fully, may well struggle. Eric Mizell, vice-president of solution engineering at software analytics company OverOps, claims the new composable, containerised, compartmentalised world of software is creating headaches for those tasked with maintaining the reliability of these complex applications. “Even within the context of monolithic applications, our dependence on 30-year-old technology, such as logging frameworks to identify functional issues in production code, is sub-standard at best – within the context of microservices and holistic applications, it is nearly worthless,” says Mizell.


Blockchain Watchers Say Decentralized Apps Are Around The Corner

More than a decade ago, Apple had to deal with that perennial chicken-and-egg problem: finding killer apps that made people want to buy an iPhone. Developers building apps on blockchain technology face the same dilemma. Not enough people are using browsers and tokens that run on a blockchain network, so it’s hard to amass the number of users needed to propel a new app to success. But that hasn’t stopped people from trying or researchers from divining that decentralized apps, or “dapps,” really are just around the corner. One recent report from Juniper Research, a market intelligence firm in the U.K., states that in the coming year we’ll see a “significant expansion” in the deployment of dapps built on blockchain technology. Regular iPhone and Android users should be able to download a dapp on their smartphone “by the end of the year,” Juniper's head of forecasting, Windsor Holden, told Forbes, adding that the dapps most likely to first gain mass adoption would deal with verifying identity or tracking the provenance of products or food in the supply chain.


IoT could be the killer app for blockchain

abstract blockchain representation of blocks and nodes
The rise of edge computing is critical in scaling up tech deployments, owing to reduced bandwidth requirements, faster application response times and improvements in data security, according to Juniper Research. Blockchain experts from IEEE believe that when blockchain and IoT are combined they can actually transform vertical industries. While financial services and insurance companies are currently at the forefront of blockchain development and deployment, the transportation, government and utilities sectors are now engaging more due to the heavy focus on process efficiency, supply chain and logistics opportunities, said David Furlonger, a Gartner vice president and research fellow. For example, pharmaceuticals are required by law to be shipped and stored in temperature-controlled conditions, and data about that process is required for regulatory compliance. The process for tracking drug shipments, however, is highly fragmented. Many pharmaceutical companies pay supply chain aggregators to collect the data along the journey to meet the regulatory standards.


Serverless Native Java Functions using GraalVM and Fn Project

The Fn Project is an open-source container-native serverless platform that you can run anywhere — any cloud or on-premise. It’s easy to use, supports every programming language, and is extensible and performant. It is an evolution of the IronFunctions project from iron.io and is mainly maintained by Oracle, so you can expect an enterprise-grade solution with first-class support for building and testing. It basically leverages container technology to run, and you can get started very quickly; the only prerequisite is having Docker installed. ... Java is often blamed for being heavy and unsuitable for running as a serverless function. That reputation does not come from nothing: Java sometimes needs a full JRE to run, with slow startup times and high memory consumption compared to other native executables like Go. Fortunately this isn't true anymore; with new versions of Java you can create modular applications, compile ahead-of-time, and use new and improved garbage collectors in both the OpenJDK and OpenJ9 implementations. GraalVM is a new flavor that delivers a JVM supporting multiple languages and compilation into a native executable or shared library.


Data Science for Startups: Deep Learning


Deep learning provides an elegant solution to handling these types of problems, where instead of writing a custom likelihood function and optimizer, you can explore different built-in and custom loss functions that can be used with the different optimizers provided. This post will show how to write custom loss functions in Python when using Keras, and show how using different approaches can be beneficial for different types of data sets. I’ll first present a classification example using Keras, and then show how to use custom loss functions for regression. The image below is a preview of what I’ll cover in this post. It shows the training history of four different Keras models trained on the Boston housing prices data set. Each of the models uses a different loss function, but all are evaluated on the same performance metric, mean absolute error. For the original data set, the custom loss functions do not improve the performance of the model, but on a modified data set, the results are more promising.
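
As a flavor of what the post describes, here is a minimal sketch of a custom Keras loss on the Boston housing data, assuming TensorFlow 2.x is installed. The log-scale squared error is my own illustrative choice, not one of the four losses from the post; the key pattern is passing the function to model.compile while still reporting mean absolute error.

```python
# Minimal sketch: a custom loss function in Keras, evaluated on MAE.
import tensorflow as tf
from tensorflow import keras

def squared_log_error(y_true, y_pred):
    # Squared error on log-transformed prices, which damps the influence
    # of the most expensive houses (illustrative choice, see above).
    y_true = tf.math.log(tf.maximum(tf.cast(y_true, tf.float32), 1e-7))
    y_pred = tf.math.log(tf.maximum(y_pred, 1e-7))
    return tf.reduce_mean(tf.square(y_true - y_pred))

(x_train, y_train), _ = keras.datasets.boston_housing.load_data()

model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(x_train.shape[1],)),
    keras.layers.Dense(1),
])
# Train with the custom loss, but report mean absolute error so that models
# with different losses remain comparable on a single metric.
model.compile(optimizer="adam", loss=squared_log_error, metrics=["mae"])
model.fit(x_train, y_train, epochs=10, batch_size=32, verbose=0)
```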


REST API Error Handling — Problem Details Response

RFC 7807 defines a "problem detail" as a way to carry machine-readable details of errors in an HTTP response, to avoid the need to define new error response formats for HTTP APIs. By providing more specific machine-readable messages with an error response, API clients can react to errors more effectively, and eventually it makes the API services much more reliable from the REST API testing perspective, and for the clients as well. In general, the goal of error responses is to create a source of information that not only informs the user of a problem, but of the solution to that problem as well. Simply stating a problem does nothing to fix it — and the same is true of API failures. RFC 7807 provides a standard format for returning problem details from HTTP APIs. ... The advantages of using this standard include unified interfaces, making APIs easier to build, test and maintain. I also think that more advantages will come in the future as more and more API providers adjust to this standard.
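
To illustrate the format, here is a minimal sketch of an endpoint returning an RFC 7807 problem detail, using Flask as an assumed web framework. The route, problem-type URI, and amounts are hypothetical, but the five fields (type, title, status, detail, instance) and the application/problem+json media type come from the RFC itself.

```python
# Minimal sketch: serving an RFC 7807 problem detail from a Flask endpoint.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/accounts/<account_id>/withdraw", methods=["POST"])
def withdraw(account_id):
    # A machine-readable error body instead of an ad-hoc error format.
    problem = {
        "type": "https://example.com/probs/out-of-credit",  # hypothetical URI
        "title": "You do not have enough credit.",
        "status": 403,
        "detail": "Your current balance is 30, but the cost is 50.",
        "instance": f"/accounts/{account_id}/withdraw",
    }
    response = jsonify(problem)
    response.status_code = 403
    response.headers["Content-Type"] = "application/problem+json"
    return response
```

Because the "type" field identifies the error category, a client can branch on it programmatically instead of parsing a free-text message.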


Protecting IoT components from being physically compromised


Disruption of these industrial devices can cause catastrophic events on an international scale, hence the importance of implementing security solutions against a variety of attack vectors. The sole purpose is to prevent the intrusion of unauthorized (external or internal) actors and avoid disruption of critical control processes. This is not a theory but rather a disturbing fact. In 2017, a group of researchers from Georgia Tech developed a worm named "LogicLocker" that caused several PLC models to transmit incorrect data to the systems they control, with harmful consequences. The common security methods of industrial networks are based mainly on the integration of dedicated network devices which are connected to the traffic artery at central junctions (usually next to network switches). This security method sniffs the data flow between the PLCs themselves, between the PLCs and the cloud (public or private) and between the user interface (HMI) and the cloud.



Quote for the day:


"Always and never are two words you should always remember never to use." -- Wendell Johnson


Daily Tech Digest - June 24, 2018

Walking With AI: How to Spot, Store and Clean the Data You Need

Machine learning initiatives are as diverse as companies themselves. Think critically about what sort of examples you need to train your algorithm on in order for it to make predictions or recommendations. For example, an online baby registry we partnered with wanted to project the lifetime value of customers within days of signup. Fortunately for us, it had proactively logged transaction data, including items customers added to their registries, where they were added and when they purchased. Furthermore, the client had logged the entire event stream, rather than just the current state of each registry, to maintain a database record. The client also brought us web and mobile event stream data. Through Heap Analytics, it had logged the type of device and browser used by each registrant into its transactional database. Using UTM codes, the registry company had even gathered attribution data, something collected for all or most marketing activities by just 51 percent of North American respondents to a 2017 AdRoll survey.



The SOA Journey: From Understanding Business to Agile Architecture


If the monolith has ceased to implement its responsibilities in a way that satisfies the business, and the development pace slows down, then something definitely needs to be done to fix this. But before that, you need to find the reason why. In my experience, the reason is always the same: tight coupling and low cohesion. If your system belongs to a single bounded context, if it’s not big enough (yeah, sounds ambiguous, I’ll elaborate on this later), then all you have to do to fix things up is to decompose your system into modules the right way. Otherwise, you need to introduce a far more autonomous and isolated concept that I call a service. This term is probably one of the most overloaded in the whole software industry, so let me clarify what I mean. I’ll give a stricter definition further on, but for now I want to point out that, first of all, a service has logical boundaries, not physical ones. It can contain any number of physical servers, which can hold both backend code and UI data. There can be any number of databases inside those services, and they can all have different schemas.


The Convergence of Digitalization and Sustainability

The promise of digitalization — big data, artificial intelligence, the internet of things, cybersecurity, and more — is often described with hyperbole. Pundits and academics alike have described “big data” as the “new oil,” “the new soil,” and the primary driver of a “management revolution,” the “Fourth Industrial Revolution,” and a “second machine age.” Artificial intelligence is receiving similar hype, with AI being compared to the rise of electricity during the Industrial Revolution. Russian President Vladimir Putin says whatever country controls AI will become the “ruler of the world.” What’s more, renowned scientist Stephen Hawking warns that “development of full AI could spell the end of the human race.” There is similar hype around sustainability, albeit of a different flavor. “Sustainability is the primary moral and economic imperative of the 21st century,” says Mervyn King, former governor of the Bank of England. “It is one of the most important sources of both opportunities and risks for businesses. Nature, society, and business are interconnected in complex ways that should be understood by decision-makers.”


Differentiation through innovation: Banks pick fintech firms over bigtech

Big tech companies are seeing greater competition from fintech companies when it comes to providing banking solutions, say experts. "Businesses have started using fintechs to solve many of the pain points in the banking value chain by doing smaller outcome-based projects, instead of signing up large long-term deals with bigtechs," said Sachin Seth, Partner and Fintech Leader, Advisory Services, EY India. ... “Large IT companies still manage the core engines for the bank, they understand the bank’s security and regulatory requirements and have tailored their systems to suit these needs over the years. Fintech companies too, as their business case grows, need to invest in these areas. The successful ones will eventually become mid- to large-sized companies, while hopefully retaining their innovation DNA,” said Axis’ Anand. While the competition large IT companies are seeing from fintech start-ups will only get fiercer, banking industry experts said that there is a strong need for collaboration. “Fintechs are nimble companies that think innovation first. However, they are not as well equipped to deploy the products. Fintech companies can drive innovation, but the commercialisation is better managed by bigtech companies,” said BoB’s Handa.


The 4 phases of digital transformation: a roadmap to Intelligent Automation

You’ve reached the end of the road in outsourcing. You’ve been dinged by the potholes of legacy systems, and your smartest people are too busy struggling under the load of paperwork. You suspect that there’s only one way to get past these roadblocks, and that’s to start a whole new journey. Next stop: Intelligent Automation. The only thing is that you have no idea what you’ll encounter along the way… The good news is, there are people who do. WorkFusion’s Client Strategy and Transformation team, which focuses on strategic advice and programmatic enablement for enterprises embarking on robotic process automation initiatives, has been down this road and around the block a few times already. They have seen patterns emerge and learned from their experiences. Which is why they wrote The 4 Phases of Digital Transformation: The Intelligent Automation Maturity Model. This complimentary 10-page eBook by WorkFusion will help you determine the best strategy for your operation by mapping each of the four stages of maturity that are relevant for most organizations.


The Brilliant Ways UPS Uses Artificial Intelligence, Machine Learning And Big Data


UPS developed its chatbot, UPS Bot, in house and released it for use just three months after the idea was born. This AI-enabled tool mimics human conversation and can respond to customer queries such as “Where is the nearest UPS location?” and can track packages and give out shipping rates. Customers can ask the bot questions either through text or voice commands through mobile devices, social media channels and virtual assistants such as Alexa and Google Assistant. The UPS Bot is able to recognize these requests and then takes the appropriate steps to complete them. The more “conversations” the bot has, the more learning it experiences to take the appropriate action in the future. During its peak period, UPS provided more than 137 million UPS My Choice alerts—the free system that lets residential customers decide “how, where and when home deliveries occur.” The chatbot is integrated with the UPS My Choice system, so customers are able to obtain information about their incoming packages and deliveries without providing a tracking number.


How Machine Learning Is Changing the World -- and Your Everyday Life

Computers can be programmed to determine individual study plans, specific to each student's needs. Algorithms can analyze test results, drastically reducing the time teachers spend on grading outside class. A student's attendance and academic history can help determine gaps in knowledge and learning disabilities. These applications won't necessarily translate to a teacher-less classroom, but will facilitate the teaching and learning environments to enhance the outcomes and ease the burden on both teacher and student. Legal firms are increasingly turning to machine learning to process massive amounts of data related to legal precedents. J.P. Morgan, for example, uses a software program dubbed COIN to review documents and previous cases in seconds, work that would otherwise take 360,000 hours. As with our teachers above, it's unlikely machine learning or AI will replace lawyers any time soon, given the necessity of rebuttal and human logic / appeal, but the incorporation of machine learning will surely reduce the time taken to put together a case, and it could expedite trials, speeding up the processes of the court.


How BuzzFeed Migrated from a Perl Monolith to Go and Python Microservices


The new microservices are developed using Python as the main language, with Go for the more performance-sensitive components. BuzzFeed’s engineering team have found that the two languages are very complementary, and it is relatively straightforward for individual developers to switch from one to the other as appropriate. At the time of writing they have around 500 microservices in stage and production environments on AWS. They break down their services using something that sounds somewhat similar to SCS; the home page on buzzfeed.com is one service, news pages are a separate service, as are author pages and so on. One challenge the team faced was with routing requests to the correct backend applications. Fastly, their CDN provider, has the ability to programmatically define behavioural logic at the edge using a C-based programming language called VCL, and initially the engineering team were writing all their routing logic in VCL directly. However, they found that as the configuration became more and more complex, making changes became more difficult and adequately testing the configuration became much more important, Mark McDonnell, a Staff Software Engineer at BuzzFeed, told InfoQ.


Serverless development with Node.js, AWS Lambda and MongoDB Atlas

The developer landscape has dramatically changed in recent years. It used to be fairly common for us developers to run all of our tools (databases, web servers, development IDEs…) on our own machines, but cloud services such as GitHub, MongoDB Atlas and AWS Lambda are drastically changing the game. They make it increasingly easier for developers to write and run code anywhere and on any device with no (or very few) dependencies. A few years ago, if you crashed your machine, lost it or simply ran out of power, it would have probably taken you a few days before you got a new machine back up and running with everything you need properly set up and configured the way it previously was. With developer tools in the cloud, you can now switch from one laptop to another with minimal disruption. However, it doesn’t mean everything is rosy. Writing and debugging code in the cloud is still challenging; as developers, we know that having a local development environment, although more lightweight, is still very valuable.


Focus More On Conceptual Knowledge To Be A Successful Data Scientist

The trend is obviously increasing, with many companies recruiting senior management positions in analytics. Having said that, India is still behind western countries. For example, in 2016 MIT Sloan Management Review reported that 54 percent of Fortune 1000 companies had a Chief Data Officer, but the corresponding number in India is much lower. This may be due to the fact that the number of analytics projects in India is still lower compared to western markets. However, with government policies to use AI in many government initiatives, this could change. At the lower level, it is business intelligence skills such as reporting and dashboard creation; this skill set still forms the majority of recruiting by Indian companies. At the higher level of AI, natural language processing (NLP) and other forms of unstructured data analysis, such as image processing using deep learning algorithms, lead the hiring trend. Data Strategy Officers are becoming common among many companies.



Quote for the day:


"The art of communication is the language of leadership." -- James Humes


Daily Tech Digest - November 10, 2017

The tooling is critical. If you have a solid, well-tested pipeline with code reviews that includes infrastructure code, then you are already ticking a lot of the boxes and can iterate faster. This means you can be more secure by responding faster to issues. Sharing ownership of DEV/QA between Operations and Dev teams means concerns about security or performance surface faster, and you expose Operations teams to the challenges faced by Engineering when environments are different. The tool chain now available means it’s easier to share, and these are significant improvements for compliance, particularly if automation means little to no production access. Why would you need it if logging and instrumentation give you all the insight you need? In a container world the notion of RDP or SSH access to systems doesn’t make sense anymore, unless you’re dealing with state and data, where things can get a little more complex.


Transitioning to the role of CISO: Dr. Alissa Johnson

One is that there are a lot of instances where we allowed the culture to drive the security governance, and, a lot of the time, we found ourselves behind the adversary. You have to let security governance drive things -- for example, with multifactor authentication. There may be a better way of doing that, but when we let the culture in a company or agency drive security governance or innovation, that's a problem. The second thing that I learned was that there really isn't a lot of difference between there and here. ... Xerox has no nuclear secrets, but hackers are still attacking us and trying to get data using the same tools and technology. What they want to get is different, but how they get it is the same. All organizations have unique aspects, but when you peel it back and look at the way the attackers come in, [it] is largely the same.


Why Europe’s GDPR privacy regulation is good for business


Organisations need to look after their information assets with the utmost care because they are responsible for its safe keeping as custodians. GDPR is a great reminder to businesses that people lend their information and organisations have a responsibility to look after it. It’s not just about confidentiality, it’s about integrity, accuracy and availability – and it’s just plain good business practice. If you’re managing customer information in a fit and proper way, then requests for that information – known as subject access requests – are nothing to fear. GDPR is expected to lead to a significant increase in consumers submitting subject access requests, which require businesses to disclose copies of the data they hold on individuals. If a company has done all the right work, finding and disclosing information for a subject access request will be easy to do, and there should be a streamlined approach in place for this.


Will human drivers always be the weak link when sharing the road with autonomous vehicles?

If all cars on the road were autonomous, accidents would decline, Ramsey told TechRepublic after the Uber accident. "While they are mixed together, the inflexibility of computers may lead to accidents that wouldn't have happened before even as some other accidents are prevented," he said. In May 2016, a Tesla driver was killed in an accident while the car was operating in its semi-autonomous Autopilot mode. A US Department of Transportation investigation did not identify any defects in design or performance of the Autopilot system. According to data released by Tesla during the investigation, Autopilot has lowered the number of crashes among its drivers by 40%. It remains to be seen if these accidents will hinder self-driving efforts moving forward.


Four Strategies for Cultivating Strong Leaders Internally

“In industry, 95 percent of your time is spent operating on the thing that you’re currently engaged in,” Banks says. “In the military, even if you’re in the midst of combat operation, you will still conduct these training exercises to continue building capacity. Imagine if a company was in the midst of delivering goods and services to its customers. Yet it still created some scenarios—like, what would HR have to do in order to merge systems associated with an acquisition?—and ran through them via a short-duration exercise while also meeting its external obligations.” Some businesses have begun to latch onto this idea, creating innovation incubators that let them experiment in real time, or even sending employees to immersive, multiple-day business simulations. Banks expects more organizations will soon follow suit.


How Law Firms Can Make Information Security a Higher Priority

There are now several prominent examples of how things can go wrong. Earlier this year, global law firm DLA Piper was hit by a strain of ransomware that forced management to shut down its offices for several days while IT dealt with the problem. In 2016, a breach referred to as the Panama Papers entailed a massive document disclosure of 2.6 terabytes of data from Panamanian-based law firm Mossack Fonseca. German newspaper Süddeutsche Zeitung got hold of the documents, resulting in coverage of celebrities' and politicians' financial transactions and other personal details. If events like these have a silver lining, it is the possibility that other firms might learn from them in hopes of avoiding the same fate. Here are four best practices law firms should consider as they seek to make information security a higher priority:


Google: Our hunt for hackers reveals phishing is far deadlier than data breaches

Despite the huge numbers, only seven percent of credentials exposed in data breaches match the password currently being used by its billion Gmail users, whereas a quarter of 3.8 million credentials exposed in phishing attacks match the current Google password. The study finds that victims of phishing are 400 times more likely to have their account hijacked than a random Google user, a figure that falls to 10 times for victims of a data breach. The difference is due to the type of information that so-called phishing kits collect. Phishing kits contain prepackaged fake login pages for popular and valuable sites, such as Gmail, Yahoo, Hotmail, and online banking. They're often uploaded to compromised websites, and automatically email captured credentials to the attacker's account.


Key Steps to Building and Managing an Effective API Marketplace


Generally, an API marketplace comprises several components. In a typical scenario, producers first publish APIs, and these are then catalogued and displayed via an API developer portal. This encourages consumers of the APIs to access the developer portal directly or indirectly (via system APIs for instance) to find, discover, and explore them. The developer portal displays different types of APIs, grouped by division, category, type etc. With specific APIs, users can then test and subscribe to them. ... Successfully implementing a marketplace requires taking a more advanced approach to implementing some aspects of the API management system, most notably the API developer portal and analytics. At the same time, organizational practices will also play an important role in establishing a highly functional marketplace.


Assessing the business, societal value of AI capabilities

People are starting to understand that we can hand off cognitive tasks -- not just physical tasks -- that we used to ask experts to do. They're not exactly robotic tasks; they're very difficult tasks. For example, if you look at the oil and gas industry, a lot of oil and gas discovery is reading seismic responses. These things are monochrome; they look like a bunch of waves on a piece of paper. It's going to take a geoscientist with years of experience to recognize the pattern. What they're really doing is mentally extracting a set of features from the data, making some inferences about it and then trying to interpolate that against other forms of information. That other information includes things like maps, other types of surveys or even just information from local people who say, 'Once upon a time, there was a legend that there were puddles of oil in the ground there.'


Severe shortage of cyber skills poses data security threat

A report last month by the Information Systems Security Association (ISSA) and the IT analyst firm Enterprise Strategy Group (ESG) shed light on the scope of the problem and offered guidelines to businesses for easing the skill crunch. This was the second year in a row that the two organizations have partnered to conduct the study, and the results depict a widespread business problem that is becoming more severe. Nearly three-fourths (70%) of the ISSA and ESG survey respondents indicate that the shortage of people with cyber-security skills has had an impact on their organization. Yet 62% of them also concede that they are falling behind in providing an adequate level of training for their data security personnel. And that figure is up almost 10 percent from last year’s study.




Quote for the day:


"Leaders must know where they are going if they expect others to willingly join them on the journey." -- Kouzes & Posner


Daily Tech Digest - November 06, 2017

Google can read your corporate data. Are you OK with that?
The big concern from enterprises this week was not being locked out of Google Docs for a time but the fact that Google was scanning documents and other files. Even though this is spelled out in the terms of service, it’s uncomfortably Big Brother-ish, and raises anew questions about how confidential and secure corporate information really is in the cloud.  So, do SaaS, IaaS, and PaaS providers make it their business to go through your data? If you read their privacy policies (as I have), the good news is that most don’t seem to. But have you actually read through them to know who, like Google, does have the right to scan and act on your data? Most enterprises do a good legal review for enterprise-level agreements, but much of the use of cloud services is by individuals or departments who don’t get such IT or legal review.


How microservices governance has evolved from SOA


Governance with monoliths is centralized. Decisions are made top-down, and rigid control is maintained to ensure uniformity across the organization and the application stack. Over time, this model degenerates, creates a system that becomes technologically and architecturally stagnant and slows down the pace of innovation. Teams are forced to merely conform to the set order of things rather than look for new, creative solutions to problems. For microservices governance, a decentralized model works best. Just as the application itself is broken down into numerous interdependent services, large, siloed teams are broken down into small, multifunctional teams. This follows the progression from development, testing and IT teams morphing into smaller DevOps teams.



5 cyber threats every security leader must know about

The first is Consumer IoT. These are the devices we are most familiar with, such as smartphones, watches, appliances, and entertainment systems. Users insist on connecting many of these to their business networks to check e-mail and sync calendars, while also browsing the Internet and checking on how many steps they have taken in the day. The list of both work and leisure activities these devices can accomplish continues to increase, and the crossover between these two areas presents increasing challenges to IT security teams. ... The cloud is transforming how business is conducted. Over the next few years, as much as 92 percent of IT workloads will be processed by cloud data centers, with the remaining 8 percent continuing to be processed in traditional on-premises data centers.


Inside-Out: How IoT Changes Everything


"Design thinking is a way to place the user at the heart of the innovation process," he said. "Our company strategy is really that innovation is not coming from startups or technologies, but from the end users and the customer observation. It's really focused on the end user. We are working, for example, with ethnologists and psychologists to understand the problems and to describe the problems. It's really important for us." Celier explained that VISEO created specialized innovation centers as part of their One Roof program. The idea is to bring clients into their production studios, much like filmmakers bring all the talent into a studio for producing movies. "We are incubating our customer's project in our building. It's a way to go faster. They come with their vision, their idea, and they leave with a platform or product," he said.


Cybersecurity thwarts productivity and innovation, report says


The top priority of most organizations — cybersecurity — is hindering productivity and innovation, according to a recent report by Silicon Valley-based virtualization firm Bromium. Based on a survey of 500 chief information security officers in large organizations in the U.S., U.K. and Germany, 74 percent of respondents said end users were frustrated by how security requirements disrupt operations. "Our research found, on average, an organization gets complaints from users twice a week saying that legitimate work activity is being blocked or rejected by over-zealous security systems," the report reads. Citing that most — 88 percent — of organizations use a prohibition approach to cybersecurity, the firm suggests "a new approach" that allows more technological innovation within the organization.


Securing Smart Homes

“The industry is starting to get educated about the need for [better security],” Dirvin says. “Now they ask more questions about it and are willing to spend more time and effort,” but not always money. Manufacturers of smart home devices typically haven’t had to think about security in the same way as a medical device maker or a manufacturer of industrial automation. “It’s a whole new area for them, so they’re rushing to build connectivity and incorporate these devices into a broader IoT strategy,” says Warren Kurisu, director of product management in the embedded systems division at Mentor, a Siemens business. “The security, from a software perspective, is something they’re just now starting to realize that they need to do.” This is especially true in the wake of the Mirai attack. The number of connected devices is expected to reach 20.4 billion by 2020, according to Gartner.


Was BadRabbit a distraction? Malware 'used to cover up smaller phishing attacks'

"There is an open, let's say instantly obvious attack, while underneath there is a hidden, fairly well-thought-out attack, to which nobody pays attention," police chief Serhiy Demedyuk told attendees while speaking at the Reuters Cyber Security Summit in Kiev. "During these attacks, we repeatedly detected more powerful, quiet attacks that were aimed at obtaining financial and confidential information." He said the so-called "hybrid attack" – meaning a multi-pronged assault – was also found to be targeting users of a popular form of Russian accounting software called 1C. "The main theory we're working on now is that they [the hackers in both attacks] were one and the same," Demedyuk added. "The goal was to get remote and undetected access."


The Internet of Things is about much more than just connecting devices

The connected nature towards which we are migrating will allow manufacturers to better understand what their customers require on a real-time basis. This in turn enables the manufacturer to recalibrate not only the actual manufacturing part of the business and what they procure, but also to become highly competitive, super in-tune with what their customer requirements are, down to quality requirements per customer. That transparency will drive product improvement and customer satisfaction to new levels. Manufacturers will not order more raw material than they need. Think about latency and how this will be addressed. Consider this example: a customer wants a product; there’s the procurement of materials, import, export, shipping, logistics, manufacturing – it can take up to six months or more.


7 habits of highly effective digital transformations

The collaborative efforts have paid off. “As a result of sharing practices, we have identified cases where we see a common failure mode in our continuous integration, delivery and operational practices — and then we are able to propagate the fix across all teams and improve and correct across all teams,” Fairweather says. Management also conducted a survey of its strategic foundational technology program. Fairweather recalls one comment an employee gave as feedback: “Instead of being a cog in the wheel I’m a better-informed contributor. The best part of learning from peers is gaining new contacts. We are more united as a global organization in pursuing these 10 areas because we had done this.” ... As organizations get larger, different groups can begin to cut themselves off from one another, creating silos of information, he says.


6 Steps Up: From Zero to Data Science for the Enterprise

Different stakeholders have different views about the desire for a Customer360, but perhaps the most clarifying is that for a company to truly drive value and delight its customers, the business must understand those customers and approach every question from their perspective. Without a Customer360 built on a foundation of data science, the business will only ever have a qualitative view of customers. I believe a true, quantitative understanding of customers relies on rigorous data science. Less attention has been paid to the concept of a Product360, but it's no less important. Depending on the business, a Product360 can potentially drive more value through cost savings and cost avoidance than the business can derive from new revenue. The ultimate goal of a Product360 is creating assets that allow the business to explore each product from earliest inception through the end of its lifecycle.




Quote for the day:

"Instinct is intelligence incapable of self-consciousness." -- John Sterling