
Can AI Adoption be Catalyzed?


At the World Economic Forum 2017 in Davos, business leaders and technologists met to discuss the impact of the fourth industrial revolution on industries. Many multi-faceted issues were brought to panel discussions, ranging from the efficacy of AI to ethics and even the impact on the workforce. One thing is clear: the most important driver of the fourth industrial revolution is data, or rather the ability to extract information from the data overload presented to us today. Leaders across the globe are sharing their viewpoints on this phenomenon and a common consensus is being sought, with a strong focus on learning from each other.

In such an interesting time, the OpenText CEO Blog is apt. In his book "The Golden Age of Innovation," Mark Barrenechea points out items that have time and again become the bedrock for the future. Whether it is skills, ethical practices, or intent, everything is of importance when we look at AI and its implementation in industry. Mark's cogent points resonate with business leaders across the globe and across industries.

At WEF 2017, in an interesting panel discussion, Ginni Rometty (CEO, IBM), Satya Nadella (CEO, Microsoft), Joi Ito (Director, MIT Media Lab), and Ron Gutman (Founder & CEO, HealthTap) were asked to share their learnings from the revolution fueled by AI. The discussion was thought-provoking and well moderated by Robert F. Smith (Chairman & CEO, Vista Equity Partners). Some of the key takeaways from the panel are summarized here:

- Element of trust: Every new technology or change succeeds when it can generate trust in its stakeholders. That trust can only be gained through transparency and practices that are in step with the team.

- Knowledge of the business domain: Making sense of vast datasets and the ability to extract information are crucial to validating the technology. Unless the information provides meaningful insight to the end user, the system will be treated as noise and acceptance will be low. Users have more confidence in a solution trained, and proven, on data from their own domain.

- Understanding of the purpose: During the planning or inception of a solution, one must ask: "What is the purpose of my solution?" or "What problem do I want to solve?" The purpose or intent gives the effort clear direction, and it boosts the trust and confidence of the stakeholders involved.

- Redevelopment of skills: An interesting comment was that, at a macro level, there are billions of jobs available across the globe and a shortage of working-age people in many countries. What hurts is when one's own job is moved. As Mark points out in his series of articles, skills need to be redeveloped; everyone must re-skill as new technologies arrive. Even though some jobs may lose their value, new jobs will take their place.

- Sense of ownership: Joi Ito cited a survey about autonomous cars. People were asked whether the car should sacrifice its riders to avoid a major accident involving many others, or save its riders at the cost of others. The answer: the car should sacrifice its riders to save others, but the survey takers would never buy that car! The example shows that technology enjoys a better adoption rate when there is ownership behind it. When outcomes are tied to responsibilities and ownership, the trust factor improves tremendously.

Yet the AI technology that enables a machine to control other machines and provide algorithms beyond the human mind still has a long way to go when it comes to wide adoption.
Given the challenges and responsibilities, some have started to call AI "Augmentative Intelligence" rather than "Artificial Intelligence." EIM providers today stand at an interesting juncture: they can help users understand their own data, augment their decisions, and help them build and train better models. This is a time to look not just at AI, but at your own EIM strategy! After all, the industry is undergoing a revolution.


Impact of Artificial Intelligence on Insurance Industry


At the RiskMinds Insurance convention, Tony Young, National Clinical Director for Innovation at NHS England, in a discussion with Rick Morley, Managing Director, Accenture Finance & Risk, made a number of interesting points:

- Global healthcare is about a $7 trillion industry, and the global insurance industry is around $3 trillion
- A large startup sector is springing up in both the insurance and healthcare sectors
- There is a lot of commonality and overlap between these two different industries
- Technology plays an important role as a disrupter in both sectors

With all the health-tracking gadgets available (pedometers, calorimeters, fitness pals), something is changing: the health and longevity of human beings. Diseases or health problems that were once terminal have become minor, but new ones have surfaced. Tony Young particularly emphasized the use of technology, information technology in particular, as a driver of transformation and growth in the future. With all the data collected, analyzed, and then made available back to the users of these gadgets, the healthcare and insurance industries are undergoing a major transformation today.

Insurance, whether general or life, is no longer served by traditional insurance practices alone. Clients are asking for more visibility and better personalized rules governing their policies. Insurers rely on the policy administration system (PAS) as their sacred engine, yet modifying rules in the PAS is often cumbersome because changes have overarching impacts and are difficult for policy managers to track. This is precisely where an AI engine could help. "AI's initial impact primarily relates to improving efficiencies and automating existing customer-facing, underwriting and claims processes," PricewaterhouseCoopers said in a report about AI in insurance.
"Over time, its impact will be more profound; it will identify, assess, and underwrite emerging risks and identify new revenue sources." While we ponder the impact of AI on insurance firms and their practices, a few have already been early adopters of the technology. An article on The Outline gives examples: Allstate has a chatbot, the Allstate Business Insurance Expert, that leads customers through getting a quote. AIG invested in a company that puts wearable trackers on workers to monitor safety. Financial insurance provider Manulife is partnering with an AI company to train a system to read news and emails. Chinese giant Baidu says it is already using AI systems to discover patterns that can be used in insurance underwriting.

Another area of the insurance industry that demands a case for AI is the claims process. With improvements in BPM systems and their integrations with ECM systems (such as Documentum xCP), claims processors have become smarter at tracking all the information related to a claim, thus reducing claims fraud. According to researchers at Insurance Nexus, this has already helped prevent losses. Adding AI to such a process could only enhance the application of the rules and the effectiveness of the process. With AI in policy administration, newer insurance products can be launched more quickly and effectively, integrating the fast inflow of information from various sources to provide a personalized service. Not only is this gratifying for the policyholder, it is also rewarding for the policy administrator. Change is coming, though at a slower pace than the obvious benefits might suggest.
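To make the claims discussion concrete, here is a deliberately simple, rules-based red-flag score, the kind of baseline an AI-assisted claims process might start from before learning rules from historical data. All field names and thresholds below are invented for illustration:

```python
def claim_risk_score(claim):
    """Return a crude fraud risk score for an insurance claim.

    All thresholds and field names are illustrative; a production
    system would learn such rules from historical claims data.
    """
    score = 0
    # Claims filed very soon after policy inception are a classic red flag
    if claim["days_since_policy_start"] < 30:
        score += 2
    # Unusually large claims relative to the insured value
    if claim["claim_amount"] > 0.8 * claim["insured_value"]:
        score += 2
    # Missing supporting documents slow validation and raise suspicion
    if not claim["has_police_report"]:
        score += 1
    return score

claims = [
    {"days_since_policy_start": 12, "claim_amount": 9000,
     "insured_value": 10000, "has_police_report": False},
    {"days_since_policy_start": 400, "claim_amount": 1200,
     "insured_value": 10000, "has_police_report": True},
]

# Route high-scoring claims to a human investigator
flagged = [c for c in claims if claim_risk_score(c) >= 3]
```

In practice a machine-learned model would replace the hand-written thresholds, but the routing pattern (score, then escalate above a cut-off) stays the same.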


Building Supply Chain Resilience and Compliance


As a connected set of companies that form the link between individual component sources and a final product, the supply chain bridges the gap between suppliers and the ultimate end user. When it works, it is an impressive orchestration of many moving parts working in concert to deliver products to customers. But what happens when a critical link in the chain fails due to business disruption, natural disaster, financial issues, or a problem within its own supply chain?

In today's globalized economy, companies' supply chains are growing bigger and more complex. While these business relationships can deliver gains in productivity and profitability, they can come at the price of additional risk exposure. Third-party risk management is the fastest growing governance, risk, and compliance (GRC) technology market and is cited as the most challenging aspect of a compliance program [Deloitte-Compliance Week 2015 Compliance Trends Report]. In fact, 77 percent of manufacturing firms report increasing supply chain complexity as the fastest growing risk to business continuity. Organizations are looking to technology to help ensure supply chain resilience, with a fierce focus on protecting their organization's brand, reputation, assets, and data.

Big supply chains call for big data

Supply chain executives are placing great store in the potential of big data. In fact, an SCM Chief Supply Chain Officer Report showed that they believe big data analytics to be more valuable than the Internet of Things, cloud computing, and 3D printing. More manufacturing firms are adopting big data strategies to tackle a wide range of risk factors within the supply chain, including minimizing risk within a global supply chain and managing supplier performance.
Tip: Choose a big data analytics solution meant for business users and analysts who want an easy, fast way to access, blend, explore, and analyze data without depending on IT or data experts, such as OpenText™ Big Data Analytics.

You're a good corporate citizen. Are your suppliers too?

It is well established that a clear, effective corporate social responsibility (CSR) program is good for business. Many customers seek out and want to do business with vendors who share their values and compliance culture. For example, demonstrating that a company's supply chain is conflict-free reassures stakeholders that the company is compliant and engenders trust among suppliers, consumers, and others. The SEC's Dodd-Frank conflict minerals rules, the EU REACH regulation, and the RoHS Directive are just a few of the regulations forcing companies to take a hard look at their supplier ecosystems.

However, compliance is threatened when suppliers fail to provide the needed information. Only 22 percent of companies required to file conflict minerals reports by the June 2014 deadline did so; most stated that their supply chains were too complex, or that suppliers did not respond to questionnaires or did not provide complete or adequate responses. Further, since mandatory reporting began in 2014, more than 70 percent of U.S. companies say they still cannot determine that their supply chains are free from conflict minerals.

Tip: Firms are turning to sophisticated information exchange solutions for supplier self-assessment to ensure compliance in areas such as conflict minerals, anti-slavery, and sustainability, such as OpenText's Conflict Minerals Reporting solution.
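As a sketch of the supplier self-assessment idea, here is a minimal way to track questionnaire completeness across a supplier base, the kind of roll-up behind the "only 22 percent filed complete reports" statistic. Supplier names and required fields are invented for illustration:

```python
# Required conflict-minerals questionnaire fields (illustrative names)
REQUIRED_FIELDS = ["smelter_list", "country_of_origin", "policy_attested"]

# Each supplier's response: True where the field was answered adequately
suppliers = {
    "Acme Metals": {"smelter_list": True, "country_of_origin": True,
                    "policy_attested": True},
    "Globex Ltd":  {"smelter_list": True, "country_of_origin": False,
                    "policy_attested": True},
    "Initech Pty": {"smelter_list": False, "country_of_origin": False,
                    "policy_attested": False},
}

def is_complete(response):
    """A response is complete only if every required field is answered."""
    return all(response.get(field, False) for field in REQUIRED_FIELDS)

complete = [name for name, resp in suppliers.items() if is_complete(resp)]
completion_rate = len(complete) / len(suppliers)
```

The same roll-up generalizes to anti-slavery or sustainability questionnaires: only the list of required fields changes.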
Managing risk begins with the onboarding process

Given the vast amount of supplier data that exists across the enterprise, technology offers an easy way to import, structure, organize, and consolidate this data in one place, and then map it to the associated supplier risks, regulations, controls, locations, and products for better visibility. And a successful supplier information management program starts with the right supplier onboarding process.

Tip: B2B suppliers who use a defined EDI format to send and receive data (typically high-volume, large suppliers) buy readily into an onboarding system built on a format they already use, such as OpenText™ B2B Managed Services.

When it comes to supply chain disruptions, it is no longer a matter of "if" but "when" the next incident will occur. A proactive approach and the right technology solutions will improve your organization's ability to mitigate, adapt, and respond quickly to threats as they arise, thus strengthening the resilience of your supply chain.


Quality and Innovation From Bench to Clinic – the Role of Data Integrity


According to a recent PwC survey of pharma CEOs, 35% consider strengthening innovation a top priority. At the same time, they face increasing challenges from more stringent regulatory requirements and rising standards of quality. The Life Sciences track at this year's Enterprise World looks at the benefits of building the quality-centric enterprise within Life Sciences. Data integrity is a foundation stone of quality and at the very heart of Digital Transformation. How can organizations gain visibility and control over all their data?

Digital Transformation, for me, is not just about replacing paper with digital processes. It is about releasing the value an enterprise holds in its data. The challenge is that most organizations have developed as a series of departmental silos. This is certainly true for quality. Separate departments or business units have created their own home-grown or legacy systems, often Excel spreadsheets, to manage product and process quality. Apart from being hugely expensive to maintain and prone to error, these systems lack enterprise-wide visibility, the ability to share information across the organization and its partners, and the ability to manage by exception.

How reliable is your quality data?

Despite best efforts, the data held within many of these quality systems must be considered less than reliable. Estimates indicate that bringing a new drug to market costs more than a billion dollars and takes up to 15 years, so the effective use of quality data can help drive innovation and reduce costs. For example, a new product submission to the FDA can run to 600,000 pages containing data from a wide range of sources; proper document management of that submission is essential to remove error and cost from the process. Companies also face regulatory requirements such as GxPs, reporting mandates, international quality standards, and other compliance issues.
This is simply part of daily operations, and assuring the integrity of the data feeding and driving these systems is paramount. The situation would be challenging enough if it were only internal data that the organization had to worry about, neatly held within enterprise applications and databases. However, every Life Sciences company operates in an ecosystem of customers, suppliers, and research partners. The amount of data is exploding, and most of it is unstructured. According to The Economist, as of 2017 more than 1.7 billion healthcare and fitness apps have been downloaded onto smartphones. These apps are collecting mountains of valuable data that companies can use to improve product development and inform innovation.

Towards the quality-centric organization

To become a quality-centric organization, you have to be able to effectively manage multiple data types from multiple sources. This data has to be reliable and available to drive quality through your entire ecosystem (see Figure 1).

Figure 1: Key drivers for the end-to-end, quality-centric Life Sciences organization
- Supplier: supplier audit, containment & failure analysis, supplier corrective action, cost recovery, supplier KPI/scorecard
- Enterprise: inspection/audits, non-conformance, CAPA, containment & failure analysis, document control & records management, change control, digital marketing & communication, training, equipment calibration, safety management, quality scorecard
- Customer: customer issue & complaint management, CAPA, containment & failure analysis, change control, document control & records management, product change notification, quality scorecard

It is difficult, if not impossible, to achieve this level of data integrity without a central platform to manage that data and content. With the Documentum acquisition, OpenText is uniquely positioned to provide the most comprehensive Enterprise Information Management platform to Life Sciences companies.
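One minimal building block for data integrity is content fingerprinting: store a cryptographic hash when a quality record is captured, then re-compute it later to detect silent modification. This is a sketch only; the record text and workflow are invented, and real GxP systems layer audit trails and signatures on top of such checks:

```python
import hashlib

def fingerprint(record_bytes):
    """Return a SHA-256 fingerprint of a record's raw bytes."""
    return hashlib.sha256(record_bytes).hexdigest()

# Store a fingerprint when the quality record is first captured...
original = b"Batch 42 | assay 99.7% | operator JD"
stored_hash = fingerprint(original)

# ...and re-compute it later to detect silent modification.
def verify(record_bytes, expected_hash):
    """True if the record's bytes still match the stored fingerprint."""
    return fingerprint(record_bytes) == expected_hash
```

Any change to the record, even a single digit in an assay result, produces a different fingerprint, which is what makes the check useful for detecting tampering or corruption.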
Enterprise application integration, B2B integration, and advanced analytics build out the ECM solution to give you new levels of visibility and control of data across end-to-end business processes. This is the focus for Enterprise World, where speakers from some of the world's largest Life Sciences brands will talk about their experiences driving quality and innovation within their companies. There are still a few spaces available, so register today. It would be great to see you there.


“It’s all About the Information!”


Chances are you've heard this before. And if you are a Ben Kingsley or Robert Redford fan, you may even recognize the line from Sneakers, released in 1992. Yes, 1992, before the Web went mainstream; remember, Netscape didn't launch the first commercially successful Web browser until 1994. Actually, it has always been about the information, or at least the right information: the information needed to make an informed decision, not just an intuitive one. In many ways the information, the data, has always been there; it's just that until recently it was not readily accessible in a timely manner. In today's internetworked business climate we are more aware of how much data is available to us through technology, like the mobile device in your pocket: at 12GB, an iPhone 6S holds massively more than the 6MB programs IBM developed to monitor an Apollo spacecraft's environmental data. That demonstrates the reality of Moore's Law, but it's another topic.

Yet because it is so easy to create and store large amounts of data today, far too often we are drowning in data and experiencing information overload. Chances are you are reading this right now in between deleting that last email and your next tweet, because the conference call you are on is being dominated by someone repeating the information you provided yesterday. Bernard Marr, a contributor to Forbes, notes "that more data has been created in the past two years than in the entire previous history of the human race." Marr's piece lists at least 19 other eye-opening facts about how much data is (and is becoming) available to us, but the one that struck me most was #20: "At the moment less than 0.5% of all data is ever analysed and used. Just imagine the potential here." 0.5%! Imagine the opportunities missed. For example, what if a customer's transaction patterns indicated they were making more and more purchases of auto parts, as well as more payments to their local garage or mechanic?
Combined with a recent increase in automatic payroll deposits, might that indicate this customer would be a good prospect for a 0.9% new car financing offer? Or imagine the crises that could be avoided. Think back to February 2016 and the now infamous multi-million dollar Bangladesh Bank heist. As you may recall, thieves managed to arrange the transfer of $81 million to the Rizal Commercial Banking Corporation in the Philippines. While it is reasonable to expect existing controls might have detected the theft, it turns out that a "printer error" alerted bank staff. The SWIFT interface at the bank is configured to print a record each time a funds transfer is executed, but on the morning of February 5 the print tray was empty, and it took until the next day to get the printer restarted. It also turns out the New York Federal Reserve Bank had sent queries to the bank questioning the transfers. What alerted the Fed? A typo. Funds intended for the Shalika Foundation were addressed to the "Shalika fandation." There is obviously more to this story, which you can read in WIRED Magazine's coverage.

Consider the difference if the bank had possessed a toolset able to flag the anomaly of a misspelled beneficiary in time to generate alerts and hold up the transfers for additional verification. Since we know the thieves timed their heist to take full advantage of the weekend, it is only a small step to have those alerts sent as an SMS text or email to the bank's compliance management staff. To best extract value from the business data available to you requires two things: an engine and a network. The engine is one designed to perform the data-driven analysis needed.
With OpenText™ Analytics Suite, financial institutions can not only derive data-driven insights to offer value-added solutions to clients; they can also better manage the risk of fraudulent payment instructions, based on insights derived from a client's payment behavior and correlating facts such as beneficiary accounts that had only been opened in May 2015 and had never previously been used. But the other, equally important, tool is the network. As trains need tracks, an analytics engine needs data, and a network to deliver it. Today, more and more of the data needed to extract value comes from outside the enterprise. OpenText™ Business Network is one way thousands of organizations exchange the data needed to manage their business, and it provides the fuel for their analytical engines. For example, suppose a bank wanted to offer customers ad-hoc reporting through their banking portal. With payment, collection, and reporting data flows delivered through Business Network's Managed Services, the underlying data would be available to the bank's analytical engine. Obviously, much of the data involved in these examples would be sensitive and confidential, and would need robust information security controls to keep it safe.
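A toolset that flags misspelled beneficiaries need not be exotic; even simple string similarity can catch a near-miss name like "Shalika fandation." The sketch below uses Python's standard difflib, with an illustrative similarity threshold; a real payments screen would combine this with account history and other signals:

```python
import difflib

def flag_suspicious_beneficiary(name, known_beneficiaries, threshold=0.85):
    """Return the known beneficiary a payee name nearly (but not exactly)
    matches, or None. The 0.85 threshold is illustrative only."""
    for known in known_beneficiaries:
        ratio = difflib.SequenceMatcher(
            None, name.lower(), known.lower()).ratio()
        # A near-miss (similar but not identical) is the red flag:
        # hold the transfer for additional verification.
        if threshold <= ratio < 1.0:
            return known
    return None

known = ["Shalika Foundation"]
match = flag_suspicious_beneficiary("Shalika fandation", known)
```

An exact match passes untouched; only the close-but-wrong spelling is held, which is precisely the anomaly that alerted the Fed in the Bangladesh Bank case.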


Thoughts From Digiruption Indaba South Africa


Digiruption Indaba is our customer day in Africa; Indaba means 'gathering'. This year, Digiruption was held at the Galleria in Johannesburg. And what a day it was, fully packed with highlights! I have had the privilege of attending this event for the last two years and I am amazed at the number of customers and partners who attend. This year we finished at just under 400 in total, which was a great effort.

After some energetic drumming to get everyone excited for the day, Country Manager Lenore Kerrigan, acting as host, gave a welcome address and thanked everyone for attending. This included long-term customers, partners, and the new customers who joined our family from ECD, HP, and others. This was followed by a welcome message and video presentation from OpenText CEO and CTO Mark Barrenechea (who was unable to join us this year). Lenore then introduced our first guest speaker.

"Exploration drives Innovation"

Our first guest speaker was Dr Adriana Marais from SAP. Adriana is Head of Innovation at SAP Africa, working on projects as diverse as quantum cryptography, blockchain, and automated drone delivery. One of these was demonstrated when a drone joined her on stage! All are joined by a common thread: innovative, leading edge, and of global benefit. But wait: as if those diverse projects weren't enough to keep her busy, Adriana is also one of the final 100 candidates for the extraordinary Mars One expedition. She spoke passionately about how the next step for humans in solving our problems can only be found in space, and specifically on Mars. I think everyone came away with a great appreciation, and a sense of awe, at people like Adriana who are prepared to take on some of the largest problems in the world and solve them.
"The best thing about social media is it allows anyone to be a celebrity, and the worst thing is it allows anyone to become a celebrity"

Following that on the keynote was Emma Sadleir, a consultant lawyer and author specialising in social media law. In her energetic and informative presentation she gave many examples of how to use social media correctly, how both famous people and, more importantly, employees use social media incorrectly, and what the liabilities are for both the company and the employee. She illustrated each point with examples, and she had a fair few people quickly checking their Twitter accounts! I asked Emma for permission to use her image here (which proves I was listening: had I used it without permission I could have broken some laws).

The 4th Industrial Revolution and the OpenText Future

Sandwiched between the two guest speakers, Thomas Dong, VP of Product Marketing, presented the OpenText keynote on our strategic direction, the impact of the 4th Industrial Revolution, and the new technologies that affect and inspire Digital Transformation. Before that, Detlev Ledger had chaired a panel of our customers (Capitec, Distell, MediClinic, NHLS, and SANBS) discussing topics such as their use of OpenText (including ECD), their future plans and experiences, and their thoughts on the future of EIM.

Lunch, Technology, Customers, Public Sector and the Silent Disco

After a nice lunch (including a few words from the High Commissioner for Canada), everyone was free for some time to mingle with the partners and network. One thing I had not seen before was a partner innovation track, similar to a silent disco (for those of you not in the know, that is where you all wear headphones to dance around in silence; yeah, I wasn't certain either!). In the context of the partner day, it allowed partners to give 15-minute breakouts where the audience was not distracted by the background noise of the event.
During the afternoon we had breakouts across three tracks; I visited breakouts in all three sessions and was pleased to see partners and customers alike attending and interacting. The three tracks were based around OpenText solutions, customers (all presented by customers), and the public sector (which makes up a sizable part of the African customer base). In the technology track, Albert Tay, Arsalan Minhas, and I covered topics such as what's new in EP2, low-code with Process Suite, where to begin with Digital Transformation, and AI implications for the enterprise. In the customers track, Distell presented their Release 16 deployment (Distell are a long-time customer and typically one of the first organisations to deploy any new Content Server release), and Capitec Bank (it's great to see our new ECD customers already comfortable talking at an OpenText event) described their approach to client centricity. Finally, Mediclinic presented how xECM for SuccessFactors is allowing them to achieve their goal of a central HR function; Mediclinic are the first OpenText customer to be live on the xECM for SuccessFactors solution and have a great story to tell. In the public sector track we had another ECD customer, the Department for Social Development, presenting their Social Grant Appeals process; the South African National Blood Service talked about their enterprise-wide OpenText and SAP implementation; and finally the National Health Laboratory Service presented their ECM journey.

Awards and Final Thoughts

Following the event we all adjourned to the roof of the Galleria for food, well-deserved cocktails, and prize-giving, watching the sun go down over Johannesburg. Among those receiving prizes were SAP for Partner of the Year, MediClinic for Go-Live Project of the Year, Capitec for Customer Visionary, and Engen for Customer of the Year.
What really stood out for me at the event was the level of investment and trust that customers have placed in OpenText, and how critical our solutions are in most cases. Be it in face-to-face discussions or email feedback from various companies, the response to our EIM strategy and acquisitions was overwhelmingly positive. So, from all of us at OpenText to all our customers and partners: thank you for helping us have a great customer day, and hopefully we will all see each other again next year.

Photos by Des Ingham-Brown, Blowfish Productions


Are We There Yet With Content Analytics and Intelligence?


Artificial Intelligence is a disruptive yet intuitive technology that is shaping the Enterprise Content Management systems of the future. While ECM systems have relied on passive responses, AI stresses active responses and builds workflows based on findings from its machine learning components. The use of AI is seen as beneficial in increasing enterprise knowledge and leveraging enterprise assets. Today, ECM implementations are often faced with questions on how to incorporate AI and make content intelligent. This blog discusses how the ECM offering from OpenText has embraced the challenges posed by AI.

Content Intelligence Services

Content intelligence is the ability to provide structure for unstructured content, a process enabled by the intelligent, automated tagging and categorizing of business content for personalized delivery and easy search. Tagging and categorizing content with rich metadata makes it available for reuse across multiple initiatives such as customer and employee portals, custom applications, and personalized sites, where capabilities such as precise search, easy navigation, and personalization are critical to productivity and customer satisfaction. By extending the value of content through reuse, content intelligence also cuts the costs of recreating information. However, despite its value for advanced searching and personalization, companies have been slow to embrace this technology because it is often perceived as complex or difficult to implement and deploy. There are multiple debates, within both business and technical domains, about the optimal way to create the taxonomies, and hence the tags, for content. Furthermore, the technology is sometimes considered inflexible, particularly when changing market conditions or new business opportunities require changes in the way content needs to be organized.
Many point solutions break down under these conditions because complex integrations with other products need to be reconfigured, resulting in re-implementation costs and valuable time wasted.

OpenText Documentum Content Intelligence Services

OpenText™ Documentum Content Intelligence Services (CIS) helps enterprises create taxonomies and tag their content automatically. CIS is a high-performance extension of the OpenText Documentum platform that automates and controls the enrichment and organization of enterprise content, based on powerful information extraction, conceptual classification, business analysis, and taxonomy and metadata management capabilities. It is fully integrated with the Documentum platform and thus provides an automated solution for all kinds of content managed within an OpenText Documentum repository, turning unstructured enterprise content into intelligent, structured content.

OpenText Content Intelligence

OpenText™ Content Intelligence is an add-on to the OpenText™ Content Suite Platform that helps increase user adoption, productivity, and management insight by accelerating the deployment of tailored applications, actionable dashboards, and reports. It bundles accelerated tile/widget creation, an enhanced REST API, and a powerful sub-tag library with a complete set of instantly deployable and easily modifiable prebuilt reports, dashboards, and applications.

OpenText Analytics

OpenText™ Analytics revolutionizes the analytics and reporting functions of the ECM solution. The Analytics Suite enables the creation of dashboards, visualizations, and analytics applications that answer vital questions across the organization. The OpenText Analytics Suite comprises two deeply integrated products: Big Data Analytics and Information Hub (iHub).
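The automated tagging described under Content Intelligence Services can be illustrated with a deliberately minimal, keyword-driven sketch. The taxonomy, keywords, and sample text below are invented for illustration; products such as Documentum CIS use far richer linguistic and conceptual analysis than simple keyword matching:

```python
# Toy taxonomy: each tag is backed by a set of evidence keywords.
TAXONOMY = {
    "finance/claims":  {"claim", "payout", "adjuster"},
    "hr/policy":       {"employee", "leave", "benefits"},
    "legal/contracts": {"agreement", "party", "termination"},
}

def auto_tag(text):
    """Assign every taxonomy term whose keywords appear in the text."""
    words = set(text.lower().split())
    return sorted(tag for tag, keys in TAXONOMY.items() if words & keys)

tags = auto_tag("The adjuster approved the claim payout to the employee")
```

Even this crude version shows the payoff: once content carries tags, precise search, navigation, and personalized delivery become lookups against metadata rather than full-text guesswork.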
Working in tandem, these two products give business users, business analysts, and citizen data scientists the ability to independently prepare and explore data, apply advanced analytics, and share and socialize the results of their analysis in compelling dashboards, interactive data visualizations, or pixel-perfect reports.

OpenText Information Hub (iHub)

OpenText™ Information Hub (iHub) is a scalable analytics and data visualization platform. iHub enables IT teams and report developers to design, deploy, and manage secure, interactive web applications, personalized reports, and dashboards fed by multiple data sources. iHub supports high volumes of users, and its integration APIs enable analytic content to be embedded in any app and displayed on any device. With an iHub application, business users can explore data on their own with interactive capabilities such as drill-downs, filtering, grouping, and building new calculated columns. Also available: Interactive Viewer, Dashboards, and Analytics Studio.

OpenText Big Data Analytics

OpenText™ Big Data Analytics is an analytics software solution for business users and analysts looking for an easy, fast way to access, blend, explore, and analyze data without depending on IT or data experts. Advanced analytics helps businesses understand their customers, markets, and business operations, as well as manage IT budgets effectively and leverage expert resources as needed. Big Data Analytics combines an analytical columnar database that easily integrates disparate data sources with built-in statistical techniques for profiling, mapping, clustering, forecasting, creating decision trees, classification and association rules, and running regressions and correlations.
Users can get a 360-degree view of their business, explore billions of records in seconds, and apply advanced and predictive analytics techniques, all in a drag-and-drop experience with no complex data modeling or coding required. Working in an easy-to-use, visual environment, business users can upload their own data, clean and enrich it, and blend data from disparate sources. They can then apply pre-built algorithms and analytical techniques to gain insights from massive data sets on the fly. OpenText offers both on-premises and cloud versions of the Big Data Analytics solution. There are also solutions specifically tailored for certain industries:

Analytics for Energy & Utilities
Analytics for Public Sector
Analytics for Manufacturing
Analytics for Financial Services
Analytics for eCommerce & Retail
Analytics for Logistics & Warehouses
Analytics for Telecommunications
Analytics for Publishing & Media
Analytics for Healthcare
Analytics for Marketing Service Providers

Some of the advantages of choosing OpenText Big Data Analytics are:

All your data in a single view: Access huge data sets from multiple sources quickly and easily.
Analyze billions of records in seconds: Leverage high-performance, real-time Big Data analysis of hundreds of tables, millions of rows, billions of records – at once – for deeper business insights.
No complex data modeling: Eliminate the need for data cubes, pre-processing, and modeling. Minimize data analysis-related IT workload and dependency on data scientists.
Best-practice analytical techniques: Pre-built algorithms and ready-to-use predictive analytic techniques. Discover threats, hidden relationships, patterns, profiles, and trends to make fact-based decisions.
No coding required: Go from raw data to sophisticated data visualizations in minutes with a few clicks.
Easy-to-use visuals: Analyze Big Data quickly and visually with decision trees, association rules, profiling, segmentation, Venn diagrams, and more.
User autonomy and self-sufficiency: Empower users without statistical backgrounds to run deep analytics with pre-packaged algorithmic functions.
Automate with workflow: String together multiple steps into one process and schedule it to run on a regular basis.

For more details on the benefits of OpenText Big Data Analytics, click here.

Conclusion

Big Data Analytics is well targeted toward specific industries and allows business users to create rich reports by analyzing data from various data sources. The solution provides easy configuration tools for business users and report creators to design and develop a report quickly. At the same time, Documentum Content Intelligence Services provides a mechanism for users to mine structured and unstructured content and automatically gather information as configured by business users. Both solutions allow easy information extraction from unstructured content and produce readable, meaningful reports. While the two solutions address different sides of the puzzle, there seems to be an opportunity remaining in information extraction techniques and the development of self-learning, intelligent algorithms that parse unstructured data into meaningful structures. The resulting structures could then be used to create reports and provide feedback to users or system workflows, rendering the system even more intelligent. More to follow on this topic shortly.
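To make the automated tagging described above concrete, here is a deliberately minimal sketch of taxonomy-based tagging: matching a document's text against a controlled vocabulary to produce metadata. The taxonomy, terms, and matching rule are invented for illustration; this is not how Documentum CIS is implemented.

```python
# Toy illustration of taxonomy-based auto-tagging (hypothetical --
# not Documentum CIS's actual algorithm): match a document's text
# against a controlled vocabulary and emit metadata tags.
TAXONOMY = {
    "finance": {"invoice", "payment", "budget"},
    "hr": {"hiring", "payroll", "benefits"},
}

def auto_tag(text):
    """Return the taxonomy categories whose terms appear in the text."""
    words = set(text.lower().split())
    return sorted(cat for cat, terms in TAXONOMY.items() if terms & words)

tags = auto_tag("Q3 budget review and payment schedule")
# → ['finance']
```

A production system would of course resolve synonyms, stem words, and weigh conceptual context rather than require exact term matches, but the output is the same in kind: structured metadata attached to unstructured content.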

Read More

MRDM Uses OpenText Analytics to Improve Health Care Outcomes

health care

One of the high-potential use cases for Big Data is to improve health care. Millions of gigabytes of information are generated every day by medical devices, hospitals, pharmacies, specialists, and more. The problem is collecting and sorting through this enormous pool of data to figure out which hospitals, providers, or treatments are the most effective, and putting those insights into the hands of patients, insurers, and other affected parties. Finally, that promise is starting to become reality. A Dutch company, Medical Research Data Management (MRDM), is using OpenText™ Analytics to help the Netherlands’ health care system identify the most productive and cost-efficient providers and outcomes. The effort to make data collection faster, easier, and more accurate is already paying off. For example, hospitals using MRDM’s OpenText-based analytics and reporting solution for evaluating medical data have been able to reduce complications after colon cancer surgeries by more than half over four years. MRDM chose OpenText Analytics after it realized it needed a robust technical platform that could support more complex, sophisticated medical reporting solutions, and larger volumes of data, than the platform it had been using since it was founded in 2012: open-source BIRT (Business Intelligence and Reporting Tools). It rejected many other commercial solutions because they either lacked key functionality or had an inconvenient pricing structure. (OpenText allows an unlimited number of end users.) The OpenText Analytics components that MRDM is using include a powerful deployment and visualization server that supports a wide range of personalized dashboards with an easy-to-use and intuitive interface. This means MRDM can easily control who sees what. For example, hospitals get reports and visualizations that are refreshed every week with raw data about the outcomes of millions of medical procedures.
They can review the findings and pinpoint any inaccurate data before approving them for publication. Next, MRDM handles release of these reports in customized formats to insurance companies, Dutch government agencies, and patient organizations. With more detailed information in hand, they can make better decisions leading to better use of limited health care resources. To learn more about this exciting customer success story, including MRDM’s plans to expand throughout Europe and further abroad, click here.

Read More

Artificial Intelligence and EIM

Artifical Intelligence

During a recent visit to Los Angeles, California, I happened to stay at the Residence Inn Marriott at LAX. Unable to ignore my hunger pangs in the middle of the night, I ordered some food – and had the best, most surprising experience! The food arrived quickly, carried not by a server but by a robot: Wally! Wally is a three-foot-tall robot that moves on wheels, can be programmed with a room number, and delivers to the room. More than being served by a robot, I was fascinated by the amount of information processing and intelligence built into the machine, enabling it to take precise turns, get on the right elevator, reach the correct floor, and then find the correct door number! I was later told that footfalls and room service requests have increased since Wally was put into service. My interest piqued, I later found that Hilton Hotels has also deployed a robot, “Connie,” as a concierge at the Hilton in McLean, VA. Connie can greet guests and answer their questions about services, amenities, and local attractions. Named after the Hilton chain’s founder, Conrad Hilton, Connie is powered by machines delivering Artificial Intelligence (AI). Robots delivering a great experience to hotel guests are an example of how Artificial Intelligence, coupled with devices, can perform tasks that are repeatable, process-oriented, rule-based operations. AI works on the principle of analyzing data, identifying patterns, and turning data into information that may be useful in decision making. This form of AI has been popular and in existence for a long time. Its popularity and longevity stem from the underlying principle that it is rule-based and can only predict from a fixed set of probable outcomes, based on the information already provided. This form of AI first drew wide attention in 1997, when IBM’s Deep Blue won a game against chess grandmaster Garry Kasparov.
Though the computer was retired soon after, the concept of a machine adapting to a large set of rules and able to make decisions became a reality. Later, Apple’s Siri, Google’s Google Now, Microsoft’s Cortana, and Amazon’s Alexa enhanced the powers of AI and entered our daily lives. This form of intelligence, which is primarily the ability to compute, is known as Applied AI, Weak AI, or Narrow AI; it can be developed quickly to solve a specific purpose. Amazon, Apple, Google, and Microsoft have not yet ended their quest to become your personal assistant. They are aiming to understand your emotions when you talk to them, which requires the context in which the data is provided – and with this, they want to develop the ability to negotiate decisions for you. Tesla and Google have already tried to take it to the next level by releasing autonomous driving software and devices – AI in the truer sense. This form of AI is known as General Purpose Artificial Intelligence. AI is exciting and is growing in presence and applications every day. The stories from sci-fi are becoming reality sooner rather than later. However, at the heart of this growth lies an abundance of data – data that can be managed, mined, analyzed, and processed to extract information. Enterprise Information Management has an important role to play in the growth of AI in enterprises. With its ability to store, manage, and present data, EIM is already bridging that gap today.

Read More

AI-Enhanced Search Goes Further Still With Decisiv 8.0

Decisiv

OpenText™ Decisiv extends enterprise search with the power of unsupervised machine learning, a species of AI. I recently blogged about how Decisiv’s machine learning takes search further, helping users find what they’re looking for, even when they’re not sure what that is. Now, Decisiv 8.0 – part of OpenText™ Release 16 and EP2 – takes the reach and depth of AI-enhanced search even further.

Take Search to More Places

In addition to being embedded in both internal and external SharePoint portals, Decisiv has long been integrated with OpenText eDOCS, enabling law firms to combine AI-enhanced search with sophisticated document management. Decisiv also connects to OpenText™ Content Suite, Documentum, and a wide range of other sources to crawl data for federated search. Decisiv 8.0 expands these integrations with the introduction of a new REST API. With this release, administrators can efficiently embed Decisiv’s powerful search capabilities into an even broader range of applications, such as conflicts systems, project management, CRM, and mobile-optimized search interfaces.

Take Search Deeper

Other enhancements in Decisiv 8.0 include a new Relevancy Analysis display, which shows researchers precisely why their search results received the rankings they did and even lets them compare the predicted relevance of selected documents. This enhancement helps researchers prioritize their research more effectively and helps administrators understand how the engine is functioning and being leveraged across the enterprise. New Open Smart Filter display options also help researchers benefit from using metadata filters to zero in on useful content. By opting to automatically show the top values in each filter category (left side of the screen below), administrators can educate researchers on how to use filters for faster access to the content they need, without training or explanation.
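A relevancy display of this kind rests on the idea that a document’s score decomposes into per-term contributions that can be shown to the researcher. As a rough illustration only – a plain TF-IDF sketch, not Decisiv’s actual ranking model – the decomposition looks like this:

```python
import math
from collections import Counter

def tfidf_contributions(query_terms, doc, corpus):
    """Break a document's score into per-term TF-IDF contributions,
    the kind of decomposition a relevancy display could surface.
    (Illustrative only -- not Decisiv's ranking model.)"""
    n = len(corpus)
    tf = Counter(doc.lower().split())
    contributions = {}
    for term in query_terms:
        # document frequency: how many corpus documents contain the term
        df = sum(1 for d in corpus if term in d.lower().split())
        idf = math.log((n + 1) / (df + 1)) + 1  # smoothed inverse doc freq
        contributions[term] = tf[term] * idf
    return contributions

corpus = ["merger agreement draft", "patent filing deadline",
          "merger closing checklist"]
scores = tfidf_contributions(["merger", "patent"], corpus[0], corpus)
```

Here the first document scores positively on “merger” and zero on “patent”, and showing those per-term numbers side by side is exactly the kind of explanation a relevance-analysis view provides.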
Decisiv Goes Beyond Legal

Decisiv’s premier law firm customer base leaves some with the impression that Decisiv is just for legal teams. In fact, Decisiv’s machine learning isn’t limited to any specific industry or use case. That’s because it analyzes unstructured content on a statistical basis, rather than a taxonomical one. (Surprisingly, sometimes lawyers do lead the way on versatile technology.)

…and Decisiv Goes to Toronto

Learn more about Decisiv Search and our other award-winning Discovery Suite products at Enterprise World this July. You’ll hear from top corporate, law firm, and government customers how their enterprises are leveraging OpenText’s machine learning to discover what matters in their data.

Read More

Step-by-step Guide: Integrate Market Leading Analytics Engines With InfoArchive

Analytics

Gaining further insights from your data is a must-have in today’s enterprise. Whether you call it analytics, data mining, business intelligence, or big data, your task is still to gain further insights from a massive heap of data. But what if your data has already been archived? What if it now resides in your long-term archiving platform? Will you be able to use it in all analytics scenarios? Let me demonstrate how easily it can be done if your archiving platform is OpenText™ InfoArchive (IA). A customer recently requested a demonstration of integration with analytics/BI tools in a workshop we were running. The question was: what possibilities does InfoArchive offer for integrating with third-party analytics engines? The answer is that everything in InfoArchive is exposed to the outside world in the form of a REST API. When I say everything, I mean every action, configuration object, and search screen – literally everything. So we decided to use the REST API for the analytics integration demo.

What Analytics/BI Tool to Pick?

A quick look at the Gartner Magic Quadrant offers some hints. I’ve used Tableau with InfoArchive in the past, so let’s look at another option on the Gartner list: Qlik. OpenText™ Analytics (or its open-source companion, BIRT) is my other choice – for obvious reasons. Let’s get our hands dirty now!

Qlik

Qlik Sense Desktop has a simple UI, but there are some powerful configuration options hidden behind the nice façade. To query a third-party source in Qlik, simply open the Data load editor and create a new connection. Pick the Qlik REST Connector and configure it. The connection configuration screen lets you specify the URL of the request, the request body, and all necessary header values – all you need for a quick test. Now that the connection is configured, you’ll have to tell Qlik how to process the IA REST response.
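For reference, the same kind of call the Qlik REST Connector issues can be reproduced from any HTTP client. The sketch below assembles such a request in Python; the endpoint path, payload shape, and token are illustrative assumptions, not the documented InfoArchive API.

```python
import json
import urllib.request

def build_ia_search_request(base_url, search_payload, token):
    """Assemble a POST request against an InfoArchive search endpoint.
    The path and payload shape here are hypothetical placeholders,
    not the documented IA REST API."""
    return urllib.request.Request(
        url=base_url + "/restapi/systemdata/searches/trades",
        data=json.dumps(search_payload).encode("utf-8"),
        headers={
            "Authorization": "Bearer " + token,
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_ia_search_request(
    "https://ia.example.com", {"criterions": []}, "DEMO-TOKEN"
)
# req can then be sent with urllib.request.urlopen(req)
```

The URL, body, and headers assembled here correspond one-to-one to the three fields you fill in on the Qlik connection configuration screen.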
Click the “Select data” button in your connection and Qlik will connect to InfoArchive, execute the query, and show you the JSON results in a tree and table browser. All you need to do is pick the column names that you want Qlik to process, as shown below. Since the IA REST response columns are stored in name-value elements, we have to transpose the data. This can easily be done with about 20 lines of code in the Qlik data connection:

Table3:
Generic LOAD * Resident [columns];

TradesTable:
LOAD Distinct [__KEY_rows] Resident [columns];

FOR i = 0 to NoOfTables()
  TableList:
  LOAD TableName($(i)) as Tablename AUTOGENERATE 1
  WHERE WildMatch(TableName($(i)), 'Table3.*');
NEXT i

FOR i = 1 to FieldValueCount('Tablename')
  LET vTable = FieldValue('Tablename', $(i));
  LEFT JOIN (TradesTable) LOAD * RESIDENT [$(vTable)];
  DROP TABLE [$(vTable)];
NEXT i

We’re almost done. Let’s visualize the data in a nice report now. Select “Create new sheet” on the Qlik “App overview” page and add tables and charts to present your data. My example can be seen below. Just click “Done” at the top of the screen and you’ll be able to see the end-user view: browse the data, filter it, and all charts will dynamically update based on your selection. Job done! Continue reading on Page 2 by clicking below.
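For readers outside Qlik, the reshaping that GENERIC LOAD performs – pivoting IA’s name/value element pairs into one flat record per row – can be sketched in any language. A minimal Python version, with illustrative field names:

```python
def transpose_name_value(elements):
    """Pivot name/value pairs (one element per column, keyed by row id)
    into one flat record per row -- the same reshaping the Qlik
    GENERIC LOAD performs. Field names here are illustrative."""
    rows = {}
    for e in elements:
        rows.setdefault(e["rowKey"], {})[e["name"]] = e["value"]
    return [dict(rowKey=k, **fields) for k, fields in sorted(rows.items())]

flat = transpose_name_value([
    {"rowKey": 1, "name": "trade_id", "value": "T-100"},
    {"rowKey": 1, "name": "amount", "value": "250"},
    {"rowKey": 2, "name": "trade_id", "value": "T-101"},
])
# → [{'rowKey': 1, 'trade_id': 'T-100', 'amount': '250'},
#    {'rowKey': 2, 'trade_id': 'T-101'}]
```

The Qlik script and this sketch do the same job; the script version simply lets Qlik handle it during the data load, before any sheet is built.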

Read More

Find More Knowledge in Your Information at Enterprise World 2017

If your office is like most, it’s got millions of gigabytes of information stashed away on computer hard drives – and maybe even file cabinets full of paper! Every single business process generates enormous data streams – not just your ERP and CRM systems, but payroll, hiring, even ordering lunch from the caterer for those regular Thursday meetings. So wouldn’t you like to find out how you can leverage the knowledge already contained in all that information? And derive more value from your existing systems of record? Come to OpenText Enterprise World this July and you’ll hear how organizations in every industry are using the cutting-edge techniques of OpenText™ Analytics to derive more value from their data – including self-service access, prediction and modeling, and innovative techniques to get insights more easily out of unstructured data (aka the stuff you use most of the time: documents, messages, and social media). We are excited to showcase OpenText Magellan at this year’s conference and show you the impact it will have in helping analyze massive pools of data and harness the power of your information. We’ll also preview the roadmap of new developments in the OpenText Analytics Suite.

Helping Our Human Brains Navigate Big Data

Thanks to cheap and abundant technology, we have so much data at our disposal – creating up to 2.5 exabytes a day by some estimates – that the sheer amount is overwhelming. In fact, it’s more than our human brains can make sense of. “It’s difficult to make decisions, because that much data is more than we can make sense of, cognitively,” says Lalith Subramanian, VP of Engineering for Analytics at OpenText. “That’s where machine learning and smart analytics come into the picture,” he explains.
“We intend to do for Big Data what earlier reporting software companies tried to do for business intelligence – simplify it and make it less daunting, so that reasonably competent people can do powerful things with Big Data.” Expect plenty of demos and use cases, including a look at our predictions from last year’s Enterprise World about who would die on Season 6 of “Game of Thrones,” and new prognostications for Season 7.

Do-It-Yourself Analytics Provisioning

Meanwhile, OpenText also plans to unveil enhancements to the Analytics Suite that will give users even more power to blend and explore their own data. OpenText™ iHub, our enterprise-grade deployment server for interactive analytics at the core of the Analytics Suite, is adding the ability to let non-technical users provision their own data for analysis, rather than relying on IT, Subramanian says. They can freely blend and visualize data from multiple sources. These sources will soon include not just structured data, such as spreadsheets and prepared database files or ERP records, but unstructured data including text documents, web content, and social media streams. That’s because new algorithms to digest and make sense of language and text are being infused into both OpenText Analytics and OpenText™ InfoFusion, an important component in the content analytics process. With OpenText™ Big Data Analytics, users will be able to apply these new, customized algorithms to self-provisioned data of many types. At the same time, InfoFusion is adding adapters to pull content off Twitter feeds and websites automatically.

The Word on the Street

One use case for this combination of OpenText InfoFusion and the Analytics Suite is to research topics live, as they’re being discussed online, Subramanian adds. “You could set it up so that it goes out as often as desired to see the latest things related to whatever person or topic you’re interested in.
Let’s say OpenText Corporation – then it’ll go look for news coverage about OpenText plus the press releases we publish, plus Tweets by and about us, all aggregated together, then analyzed by source, sub-topic, emotional tone (positive, negative, or neutral), as we’ve demonstrated with our content analytics-based Election Tracker. Over time we’d add more and more (external information) sources.” Keep in mind, politicians, pundits, and merchants have been listening to “the word on the street” for generations. But that used to require armies of interns to go through all the mail, voice messages, conversations, or Letters to the Editor – and the net result was score-keeping (“yea” vs. “nay” opinions) or subjective impressions. Now these opinions, like every other aspect of the digital economy, can be recorded and analyzed by software that’s objective and tireless. And they can add up to insights that enrich your business intelligence for better decision-making. To see and hear all of this in person, don’t miss Enterprise World in Toronto, July 10-13. Click here for more information and to register.

Read More

What a Difference a Day Makes: Get up to Speed on OpenText Analytics in 7 Hours

Analytics Workshop

One of the biggest divides in the work world these days is between people with software skills and “business users” – the ones who can work their magic on data and make it tell stories, and… well, everyone else (those folks who often have to go hat in hand to IT, or their department’s digital guru, and ask them to crunch the numbers or build them a report). But that divide is eroding with help from OpenText™ Analytics. With just a few hours’ training, you can go from absolute beginner to creating sophisticated data visualizations and interactive reports that reveal new insights in your data. And if you’re within travel distance of Washington, D.C., have we got an offer for you! Join OpenText Analytics Wednesday, May 10, at The Ritz-Carlton, Arlington, VA for a free one-day interactive, hands-on analytics workshop that dives deep into our enterprise-class tools for designing, deploying, and displaying visually appealing information applications. During this workshop, you’ll gain insights from our technical experts Dan Melcher and Geff Vitale. You’ll learn how OpenText Analytics can provide valuable insights into customers, processes, and operations, improving how you engage and do business. We recently added a bonus session in the afternoon, on embedding secure analytics into your own applications. Here, you’ll see why many companies use OpenText™ iHub to deliver embedded analytics, either to customers (e.g., through a bank’s portal) or as an OEM app vendor, embedding our enterprise-grade analytics on a white-label basis to speed up the development process. Here’s what to expect in each segment:

Learning the Basics of OpenText Analytics Suite

Get introduced to the functions and use cases of OpenText Analytics Suite, including basic data visualizations and embedded analytics. Start creating your own interactive reports and consider what this ability could do for your own business.
Analyze the Customer

You’ll learn about the advanced and predictive analysis features of the Analytics Suite by doing a walk-through of a customer analysis scenario. Begin segmenting customer demographics, discovering cross-sell opportunities, and predicting customer behavior, all in minutes – no expertise needed in data science or statistics.

Drive Engagement with Dashboards

A self-service scenario where you create and share dashboards completely from scratch will introduce the dashboarding and reporting features of OpenText Analytics. See how easy it is to assemble interactive data visualizations that allow users to filter, pivot, explore, and display the information any way they wish.

Embed Secure Analytics with iHub

After the lunch break, learn how to enable secure analytics in your application, whether as a SaaS or on-premises deployment. OpenText answers the challenge with uncompromising extensibility, scalability, and reliability.

Who should attend?

IT directors and managers, information technology managers, business analysts, product managers and architects
Team members who define, design, and deploy applications that use data visualizations, reports, dashboards, and analytics to engage their audience
Consultants who help clients evaluate and implement the right technology to deliver data visualizations, reports, dashboards, and analytics at scale

If you are modernizing your business with Big Data and want your entire organization to benefit from compelling data visualizations, interactive reports, and dashboards – then don’t miss this free, hands-on workshop! For more details or to sign up, click here. And if you’d really like to dive into the many facets of OpenText Analytics, along with Magellan, our next-generation cognitive platform, and the wide world of Enterprise Information Management, don’t miss Enterprise World, July 10-13 in Toronto. For more information, click here.

Read More

Enterprise World: Analytics Workshop Takes You From Zero to Power User in 3 Hours

Analytics Workshop

One of the great things about OpenText™ Analytics Suite is its ease of use. In less than three hours, you can go from being an absolute beginner to creating dynamic, interactive, visually appealing reports and dashboards. That’s even enough time to become a “citizen data scientist,” using the advanced functionalities of our Analytics Suite to perform sophisticated market segmentation and make predictions of likely outcomes and customer behavior. So by popular demand, we’re bringing back our Hands-On Analytics Workshop at Enterprise World 2017, July 10-13 in Toronto. The workshop comprises three 50-minute sessions on Tuesday afternoon, July 11. Just bring your laptop, connect to our server, and get started with a personalized learning experience. You can attend the sessions individually – but for the full experience, you’ll want to attend all three. Learn how businesses and nonprofits use OpenText Analytics to better engage customers, improve processes, and modernize their operations by providing self-service analytics to a wide range of users across a variety of use cases. This three-part workshop is also valuable for users of OpenText™ Process Suite, Experience Suite, Content Suite, and Business Network. Here’s what to expect in each segment:

1. ANA-200: Learning the Basics of OpenText Analytics Suite

This demo-packed session serves as an introduction to the series and will arm you with all you need to know about the OpenText Analytics Suite, including use cases, benefits, and customer successes, as well as a deep dive into product features and functionality. Through a series of sample application demonstrations, you will learn how OpenText Analytics can meet any analysis requirement or use case, including yours! This session serves as a perfect lead-in for the next two sessions: ANA-201 and ANA-202.

2. ANA-201 Hands-On Workshop: Using Customer Analytics to Improve Engagement

This hands-on session will introduce the advanced and predictive analysis features of the Analytics Suite by walking you through a customer analysis scenario using the live product. Connect from your own laptop to our server and begin segmenting customer demographics, discovering cross-sell opportunities, and predicting customer behavior, all in minutes – no expertise needed in data science or statistics. You will learn how OpenText Analytics can provide valuable insights into customers, processes, and operations, improving how you engage and do business.

3. ANA-202 Hands-On Workshop: Working with Dashboards to Empower Your Business Users

This hands-on session will introduce the dashboarding and reporting features of OpenText Analytics by walking you through a self-service scenario where you create and share dashboards completely from scratch. Connect from your laptop to our server and see just how easy it is to assemble interactive data visualizations that allow users to filter and pivot the information any way they wish, in just a matter of minutes! You will learn how OpenText makes it easy for any user to analyze and share information, regardless of their technical skill. Of course, we have plenty of other interesting sessions about OpenText Analytics planned for Enterprise World. Get a sneak peek at product road maps, exciting new features (including developments in Magellan, our cognitive software platform), and innovative customer use cases for the OpenText Analytics Suite. Plus, get tips from experts, immerse yourself in technical details, network with peers, and enjoy great entertainment. Click here for more details about attending Enterprise World. See you in Toronto!

Read More

Knorr-Bremse Keeps the Wheels Rolling with Predictive Maintenance Powered by OpenText Analytics

diagnosis

Trains carry billions of passengers and tons of freight a year worldwide, so making sure their brakes work properly is no mere routine maintenance check. Helping rail transport operate more safely and efficiently is top-of-mind for the Knorr-Bremse Group, based in Munich, Germany. The company is a leading manufacturer of brakes and other components of trains, metro cars, and buses. These components include sophisticated programming to optimize operations and diagnosis. The company developed iCOM, an Internet of Things-based platform for automated maintenance and diagnosis.  Through onboard sensors, iCOM (Intelligent Condition Oriented Maintenance) gathers data wirelessly from more than 30 systems throughout a train car, including brakes, doors, wipers, heating and ventilation.  These IoT sensors continually report back conditions such as temperature, pressure, energy generation, duration of use, and error conditions. iCOM analyzes the data to recommend condition-based, rather than static, scheduled maintenance. This means any performance issue can be identified before it becomes a serious safety problem or a more costly repair or replacement. For iCOM customers, this means better safety, more uptime, improved energy efficiency,  and lower operating costs for their rail fleets.   As more customers adopted the solution, they began demanding more sophisticated analysis (to see when, where, and even why an event happens), more visually engaging displays, and the ability to build their own reports without relying on IT. Knorr-Bremse knew it needed to upgrade the technology it was using for analysis and reporting on the vast quantities of data that the iCOM solution gathers, replacing open-source BIRT (Business Intelligence and Reporting Tools). A new analytics platform would also have to be scalable enough to cope with the enormous volumes of real-time data that thousands of sensors across a rail fleet continually generate. 
Further, Knorr-Bremse needed an analytics solution it could develop, embed into the overall iCOM platform, and bring to market with the least possible time and coding effort. The answer to these challenges was OpenText™ Analytics Suite. “Due to the easy-to-use interface of OpenText Analytics, our developers were quickly productive in developing the analytics and reporting aspects of iCOM. iCOM is based on Java, and consequently it has been very easy to integrate and embed the OpenText Analytics platform [into it]. It is not just about shortening the time to develop, though. The results have to look good – and with OpenText, they do,” says Martin Steffens, the iCOM digital platform project manager and software architect at Knorr-Bremse. To learn more about Knorr-Bremse’s success with OpenText Analytics, including a potential drop of up to 20 percent in maintenance costs, click here.
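The condition-based principle behind iCOM – flag a component when its readings drift outside tolerance, instead of servicing it on a fixed calendar – can be illustrated with a trivial rule check. The sensor names and thresholds below are invented for illustration; this is not iCOM’s actual logic.

```python
# Toy condition-based maintenance check (sensor names and thresholds
# are invented for illustration; this is not iCOM's logic).
LIMITS = {"brake_temp_c": (0, 120), "line_pressure_bar": (4.5, 10.0)}

def maintenance_alerts(readings):
    """Return the sensors whose latest reading falls outside tolerance."""
    alerts = []
    for sensor, value in readings.items():
        lo, hi = LIMITS[sensor]
        if not (lo <= value <= hi):
            alerts.append(sensor)
    return alerts

alerts = maintenance_alerts({"brake_temp_c": 135.0, "line_pressure_bar": 6.2})
# → ['brake_temp_c']
```

A real system like iCOM works over streams from thousands of sensors and factors in trends and history rather than single readings, but the payoff is the same: attention goes to the component that needs it, before a scheduled inspection would have caught it.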


Discovery Rises at Enterprise World

This summer will mark a full year since Recommind became OpenText Discovery, and we’re preparing to ring in that anniversary at our biggest conference yet: Enterprise World 2017! We’re inviting all of our clients, partners, and industry peers to join us for three days of engaging roundtables, interactive product demos, Q&A with experts, a keynote from none other than Wayne Gretzky, and, of course, the latest updates, roadmaps, and visions from OpenText leaders. Here’s a sneak peek of what to expect from OpenText Discovery’s track: The Future of Enterprise Discovery. We’ll be talking at a strategic and product-roadmap level about unifying Enterprise Information Management (EIM) with eDiscovery. New data source connectors, earlier use of analytics, and even more flexible machine learning applications are on the way! Introduction to eDiscovery. Our vision for the future of eDiscovery is broader than the legal department, and we’re spreading that message with sessions tailored for IT and data security professionals who want to know more about the legal discovery process and data analysis techniques. Why Legal is Leading the Way on AI. Our machine learning technology was the first to receive judicial approval for legal document review, and in the years since, we’ve continued to innovate, develop, and expand machine learning techniques and workflows. In our sessions, we’ll highlight current and future use cases for AI in investigations, compliance, due diligence, and more. Contract Analysis and Search. We’ll also have sessions focused exclusively on innovations in enterprise search and financial contract analysis. Join experts to learn about the future of predictive research technology and the latest data models for derivative trading optimization and compliance.
Our lineup of sessions is well underway, and we’ve got an exciting roster of corporate, academic, government, and law firm experts, including a special keynote speaker on the evolving prominence of technology in law. Register here for EW 2017 with promo code EW17TOR for 40% off, and we’ll see you in Toronto!


From KPIs to Smart Slackbots, Hot New Analytics Developments at OpenText Enterprise World 2017

Innovation never sleeps in the OpenText Analytics group, where we’re working hard to put together great presentations for Enterprise World 2017, July 10-13 in Toronto. We’ll offer a sneak peek at product road maps, exciting new features, and innovative customer use cases for the OpenText Analytics Suite. Plus, you can get hands-on experience building custom-tailored apps, get tips from experts, immerse yourself in technical details, and network with peers. Learn about: reporting and dashboards with appealing, easy-to-create visual interfaces; self-service analytics to empower your internal users and customers and help you make better decisions; best-of-breed tools to crunch massive Big Data sets and derive insights you never could before; cognitive computing and machine learning; capturing the Voice of the Customer; and structured and unstructured content analytics that can unlock the hidden value in your documents, chats, and social media feeds. Our presentations include: Industry-focused sessions including OpenText Analytics for Financial Services. Hear how we add value in common use cases within the financial industry, including customer analytics, online consumer banking, and corporate treasury services. Showcases of hot new functions like Creating Intelligent Analytic Bots for Slack (the popular online collaboration tool). Personalized training in OpenText Analytics. Our three-part Hands-On Analytics Workshop can take you from absolute beginner to competent user, harnessing the power of Big Data for better insights and building compelling data visualizations and interactive reports and dashboards. Technical deep dives into popular tools such as Business Performance Management Analytics. We’ll show you how to use OpenText Analytics to measure KPIs and performance-driven objectives, including the popular Balanced Scorecard methodology. A fascinating use case: Financial Contract Analysis with Perceptiv.
See how customers are using our advanced analytics tool to capture, organize, and extract relevance from over 200 fields in half a million financial derivative contracts. How Many Lawyers Does It Take to Analyze an Email Server? Learn how lawyers and investigators are using our cutting-edge OpenText Discovery technology, including email mapping, concept-based search, and machine learning, to find the “smoking guns” in thousands of pages of email. Click here for more details about attending Enterprise World. See you in Toronto!


For Usable Insights, You Need Both Information and the Right Analytical Engine


“It’s all about the information!” Chances are you’ve heard this before. If you are a Ben Kingsley or Robert Redford fan, you may recognize the line from Sneakers (released in 1992). Yes, 1992. Before the World Wide Web! (Remember, Netscape didn’t launch the first commercially successful Web browser until 1994.) Actually, it’s always been about the information, or at least the right information – what’s needed to make an informed decision, not just an intuitive one. In many ways the information, the data, has always been there; it’s just that until recently, it wasn’t readily accessible in a timely manner. Today we may not realize how much data is available to us through technology, like the mobile device in your pocket – at 12GB, an iPhone 6S holds 2,000 times more than the 6MB of programs IBM developed to monitor the Apollo spacecrafts’ environmental data. (Which demonstrates the reality of Moore’s Law, but that’s another story.) Yet because it’s so easy to create and store large amounts of data today, far too often we’re drowning in data and experiencing information overload.

Drowning in Data

Chances are you’re reading this in between deleting that last email, before your next Tweet, because someone on the conference call you are on is repeating the information you provided yesterday. Bernard Marr, a contributor to Forbes, notes “that more data has been created in the past two years than in the entire previous history of the human race”. Marr’s piece has at least 19 other eye-opening facts about how much data is becoming available to us, but the one that struck me the most was this: less than 0.5% of all data is ever analyzed and used. Imagine the opportunities missed. Just within the financial industry, the possibilities are limitless. For example, what if the transaction patterns of a customer indicated they were buying more and more auto parts as well as making more payments to their local garage (or mechanic).
Combined with a recent increase in automatic payroll deposits, might that indicate this customer would be a good prospect for a 0.9% new-car financing offer? Or imagine the crises which could be avoided. Think back to February 2016 and the Bangladesh Bank heist, where thieves managed to arrange the transfer of $81 million to the Rizal Commercial Banking Corporation in the Philippines. While it’s reasonable to expect existing controls might have detected the theft, it turns out that a “printer error” alerted bank staff in time to forestall an even larger theft of up to $1 billion. The SWIFT interface at the bank is configured to print out a record each time a funds transfer is executed, but on the morning of February 5 the print tray was empty. It took until the next day to get the printer restarted. The New York Federal Reserve Bank had sent queries to the bank questioning the transfers. What alerted them? A typo: funds to be sent to the Shalika Foundation were addressed to the “Shalika fandation.” The full implications of this are covered in WIRED Magazine.

Analytics: Spotting Problems Before They Become Problems

Consider the difference if the bank had had a toolset able to flag the anomaly of a misspelled beneficiary in time to generate alerts and hold up the transfers for additional verification. The system was programmed to generate alerts as printouts. It’s only a small step to have alerts like this sent as an SMS text or email to the bank’s compliance team, which might have attracted notice sooner. Extracting the most value from the business data available to you requires two things: an engine and a network. The engine should be like the one in OpenText™ Analytics, designed to perform the data-driven analysis needed.
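A minimal sketch of that kind of check, using only Python’s standard library (and not any actual OpenText or SWIFT component): compare each outgoing beneficiary name against a list of known beneficiaries and flag near-misses such as “Shalika fandation”.

```python
# Illustrative typo-detection sketch: flag a payment when the beneficiary
# name is a near-miss for a known beneficiary, instead of treating an
# unknown spelling as just another new name. The threshold is an assumption.
from difflib import SequenceMatcher

KNOWN_BENEFICIARIES = {"Shalika Foundation", "Rizal Commercial Banking Corporation"}

def beneficiary_alert(name, known=KNOWN_BENEFICIARIES, threshold=0.85):
    """Return a close-but-not-exact known name, or None if the name is exact or unrelated."""
    if name in known:
        return None  # exact match: nothing suspicious about the spelling
    for candidate in known:
        ratio = SequenceMatcher(None, name.lower(), candidate.lower()).ratio()
        if ratio >= threshold:
            return candidate  # near-miss: hold the transfer for verification
    return None

suspect = beneficiary_alert("Shalika fandation")
```

A production system would of course combine fuzzy matching with many other signals, but even this toy version turns the typo that a printer happened to surface into a deliberate, automatic alert.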
With the OpenText™ Analytics Suite, financial institutions can not only derive data-driven insights to offer value-added solutions to clients, they can also better manage the risk of fraudulent payment instructions, based on insights derived from a client’s payment behavior. For example, at the Bangladesh Bank, analytics might have flagged some of the fraudulent transfers to Rizal Bank in the Philippines by correlating the facts that the Rizal accounts had only been opened in May 2015, contained only $500 each, and had never been beneficiaries before.

Business Network: Delivering Data to Analytical Engines

But the other, equally important tool is the network. Just as trains need tracks, an analytics engine needs data, as well as the network to deliver it. Today, more and more of the data needed to extract value comes from outside the enterprise. The OpenText™ Business Network is one way thousands of organizations exchange the data needed to manage their business and provide the fuel for their analytical engines. For example, suppose a bank wanted to offer its customers the ability to generate ad-hoc reports through their banking portal. With payment, collection, and reporting data flows delivered through OpenText Business Network Managed Services, the underlying data would be available to the bank’s analytical engine. Obviously, much of the data involved in the examples I’ve provided would be sensitive, confidential, and in need of robust information security controls to keep it safe. That will be the subject of my next post.
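The correlation described above (account age, balance, and beneficiary history) amounts to a simple rule-based screen. A hedged sketch, with invented field names and thresholds that are not OpenText functionality, might look like this:

```python
# Rule-based risk screen sketch: combine independent signals about the
# destination account (recently opened, near-empty, no transfer history)
# and hold the transfer for review when several apply at once.
from datetime import date

def risk_signals(transfer, today=date(2016, 2, 5)):
    """Return the list of risk signals that apply to a proposed transfer."""
    signals = []
    account = transfer["destination_account"]
    age_days = (today - account["opened"]).days
    if age_days < 365:
        signals.append("recently_opened_account")
    if account["balance_usd"] < 1000:
        signals.append("near_empty_account")
    if not account["prior_beneficiary"]:
        signals.append("no_transfer_history")
    return signals

transfer = {
    "amount_usd": 81_000_000,
    "destination_account": {
        "opened": date(2015, 5, 15),   # opened May 2015, per the reporting
        "balance_usd": 500,
        "prior_beneficiary": False,
    },
}
signals = risk_signals(transfer)
hold_for_review = len(signals) >= 2   # multiple independent signals => manual check
```

No single signal proves fraud; it is the correlation of several weak signals on one large transfer that justifies pausing it for verification.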


Steel Mill Gains Insight, Makes Better Decisions Through Analytics


When you think of a steel mill, crucibles of glowing molten metal, giant molds, and rollers probably come to mind, not complex financial analysis. But like every other industry nowadays, steel mills – especially ones that specialize in scrap metal recycling – have to keep reviewing their material and production costs and the ever-changing demand for their products so that they can perform efficiently in a competitive global market. That was the case for North Star BlueScope Steel in Delta, Ohio, which produces hot-rolled steel coils, mostly for the automotive and construction industries. Founded in 1997, the company is the largest scrap steel recycler in Ohio, processing nearly 1.5 million tons of metal a year. To operate profitably, North Star BlueScope examines and analyzes its costs and workflow every month, pulling in data from all over the company, plus external market research. But it was hampered by slow and inefficient technology, centered on Microsoft Excel spreadsheets so large and unwieldy that they took up to 10 minutes just to open. Comparing costs for, say, the period of January through May required North Star staffers to open five separate spreadsheets (one for each month) and combine the information manually. Luckily, the company was already using OpenText™ iHub as a business intelligence platform for its ERP and asset management systems. It quickly realized iHub would be a much more efficient solution for its monthly costing analysis than the Excel-based manual process.

Making Insights Actionable

In fact, North Star BlueScope Steel ended up adopting the entire OpenText™ Analytics Suite, including OpenText™ Big Data Analytics (BDA), whose advanced approach to business intelligence lets it easily access, blend, explore, and analyze data. The results were impressive. The steel company can now analyze a much larger range of its data and get better insights to steer decision-making.
For example, it can draw on up to five years’ worth of data in a single, big-picture report, or drill down to a cost-per-minute understanding of mill operations. Now it has a better idea of the grades and mixes of steel products most likely to generate higher profits, and the customers most likely to buy those products. To learn more about how North Star BlueScope Steel is using OpenText Analytics to optimize its operations, plus its plans to embrace the Internet of Things by plugging data streams from its instruments – covering electricity consumption, material usage, steel prices, and even weather – directly into Big Data Analytics, click here.
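To make the contrast with the old spreadsheet workflow concrete, here is a toy sketch of the kind of multi-month roll-up described above. The categories and figures are invented, and this is plain Python rather than North Star’s actual data or the iHub platform:

```python
# Multi-month costing roll-up sketch: instead of opening one spreadsheet
# per month and combining figures by hand, aggregate per-category costs
# across any date range in one pass. All numbers are made up.

monthly_costs = {
    "2017-01": {"scrap": 410_000, "energy": 120_000, "labor": 95_000},
    "2017-02": {"scrap": 395_000, "energy": 118_000, "labor": 95_000},
    "2017-03": {"scrap": 420_000, "energy": 125_000, "labor": 97_000},
}

def combined_costs(costs_by_month, months):
    """Sum each cost category over the requested months."""
    totals = {}
    for month in months:
        for category, amount in costs_by_month[month].items():
            totals[category] = totals.get(category, 0) + amount
    return totals

# One call replaces opening three spreadsheets and adding the columns by hand.
q1 = combined_costs(monthly_costs, ["2017-01", "2017-02", "2017-03"])
```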


Unlock Unstructured Data and Maximize Success in Your Supply Chain

By any standard, a successful business is one that can find new customers, discover new markets, and pursue new revenue streams. But today, succeeding via digital channels, delivering an excellent customer experience, and embracing the digital transformation is the true benchmark. Going digital can increase your agility, and with analytics you can get the level of insight you need to make better decisions. Advances in analytics and content management software are giving companies more power to cross-examine unstructured content, rather than leaving them to rely on intuition and gut instinct. Now, you can quickly identify patterns and offer a new level of visibility into business operations. Look inside your organization to find the value locked within the information you have today. The unstructured data being generated every day inside and outside your business holds targeted, specific intelligence that is unique to your organization and can be used to find the keys to current and future business drivers. Unstructured data like emails, voicemails, written documents, presentations, social media feeds, surveys, legal depositions, web pages, videos, and more offer a rich mine of information that can inform how you do business. Unstructured content, on its own, or paired with structured data, can be put to work to refine your strategy. Predictive and prescriptive analytics offer unprecedented benefits in the digital world. Consider, for instance, the data collected from a bank’s web chat service. Customer service managers cannot read through millions of lines of free text, but ignoring this wealth of information is not an option either. Sophisticated data analytics allow banks to spot and understand trends, like common product complaints or frequently asked questions. They can see what customers are requesting to identify new product categories or business opportunities. 
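As a rough sketch of the chat-analytics idea described above (with invented topics and transcripts, not an actual OpenText API): count how often tracked topic phrases occur across chat logs, so that common complaints and questions surface without anyone reading every line.

```python
# Trend-spotting sketch over web-chat text: tally tracked topic phrases
# across transcripts so frequent complaints and questions stand out.
# The topic list and the chats are illustrative examples.
from collections import Counter

TOPICS = ["reset password", "overdraft fee", "mobile app"]

def topic_counts(chats, topics=TOPICS):
    """Count occurrences of each tracked topic phrase across all chats."""
    counts = Counter()
    for chat in chats:
        text = chat.lower()
        for topic in topics:
            counts[topic] += text.count(topic)
    return counts

chats = [
    "Hi, I need to reset password for online banking.",
    "Why was I charged an overdraft fee twice? The mobile app shows one charge.",
    "The mobile app crashes when I try to reset password.",
]
trends = topic_counts(chats)
```

Real content analytics goes well beyond phrase counting (entity extraction, sentiment, clustering), but even this simple tally shows how free text becomes a ranked list of what customers are actually asking about.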
Every exchange, every interaction, and all of your content holds opportunity that you can maximize. Making the most of relevant information is a core principle of modern enterprise information management. This includes analyzing unstructured information that is outside the organization or passed between the company and trading partners across a supply chain or business network. As more companies use business networks, there is an increase in the types and amounts of information flowing across them: things like orders, invoices, delivery information, partner performance metrics, and more. Imagine the value of understanding the detail behind all that data, and the insight it could provide for future planning. Even better: what if you could analyze it fast enough to make a difference in what you do today? Here are two common, yet challenging, scenarios and their solutions.

Solving challenges in your enterprise

Challenges within the business network – A business network was falling behind in serving its customers. They needed to increase speed and efficiency within their supply chain to provide customers with deeper business process support and rich analytics across their entire trading partner ecosystem. With data analytics, the company learned more from their unstructured data – emails and documents – and was able to gain clearer insights into transactions flowing across the network. The new system allows them to identify issues and exceptions earlier, take corrective action, and avoid problems before they occur. Loss of enterprise visibility – A retail organization was having difficulty supporting automatic machine-to-machine data feeds coming from a large number of connected devices within their business network. With the addition of data analytics across unstructured data sources, they gained extensive visibility into the information flowing across their supply chain.
Implementing advanced data analytics allowed them to analyze information coming from all connected devices, which afforded a much deeper view into data trends. This intelligence allowed the retailer to streamline their supply chain processes even further. Want to learn more? Explore how you can move forward with your digital transformation; take a look at how OpenText Release 16 enables companies to manage the flow of information in the digital enterprise, from engagement to insight.
