Analytics

MRDM Uses OpenText Analytics to Improve Health Care Outcomes

health care

One of the high-potential use cases for Big Data is improving health care. Millions of gigabytes of information are generated every day by medical devices, hospitals, pharmacies, specialists, and more. The challenge is collecting and sorting through this enormous pool of data to figure out which hospitals, providers, or treatments are the most effective – and putting those insights into the hands of patients, insurers, and other affected parties. That promise is finally starting to become reality.

A Dutch company, Medical Research Data Management (MRDM), is using OpenText™ Analytics to help the Netherlands’ health care system identify the most productive and cost-efficient providers and outcomes. The effort to make data collection faster, easier, and more accurate is already paying off. For example, hospitals using MRDM’s OpenText-based analytics and reporting solution for evaluating medical data have reduced complications after colon cancer surgeries by more than half over four years.

MRDM chose OpenText Analytics after realizing it needed a more robust technical platform – one that could support more complex, sophisticated medical reporting solutions and larger volumes of data than open-source BIRT (Business Intelligence and Reporting Tools), which it had used since its founding in 2012. It rejected many other commercial solutions because they either lacked key functionality or had an inconvenient pricing structure. (OpenText allows an unlimited number of end users.)

The OpenText Analytics components MRDM uses include a powerful deployment and visualization server that supports a wide range of personalized dashboards with an easy-to-use, intuitive interface. This means MRDM can easily control who sees what. For example, hospitals get reports and visualizations that are refreshed every week with raw data about the outcomes of millions of medical procedures. They can review the findings and pinpoint any inaccurate data before approving them for publication. Next, MRDM handles release of these reports in customized formats to insurance companies, Dutch government agencies, and patient organizations. With more detailed information in hand, they can make better decisions leading to better use of limited health care resources.

To learn more about this exciting customer success story, including MRDM’s plans to expand throughout Europe and further abroad, click here.

Read More

Artificial Intelligence and EIM

Artificial Intelligence

During a recent visit to Los Angeles, California, I happened to stay at the Residence Inn Marriott at LAX. Unable to ignore my hunger pangs in the middle of the night, I ordered some food – and had the best, most surprising experience! The food arrived quickly, carried not by a server but by a robot: Wally. Wally is a three-foot-tall robot that moves on wheels, can be programmed with a room number, and delivers right to the door. More than being served by a robot, I was fascinated by the amount of information processing and intelligence built into the machine – enough to take precise turns, get on the right elevator, reach the correct floor, and then find the correct door number! I was later told that both footfalls and room service requests have increased since Wally was put into service.

My interest piqued, I later found that Hilton Hotels has also deployed a robot, “Connie,” as a concierge at the Hilton in McLean, VA. Connie can greet guests and answer their questions about services, amenities, and local attractions. Named after the Hilton chain’s founder, Conrad Hilton, Connie is powered by machines delivering Artificial Intelligence (AI).

Robots delivering a great experience to hotel guests are an example of how AI, coupled with devices, can perform tasks that are repeatable, process-oriented, rule-based operations. AI works on the principle of analyzing data, identifying patterns, and turning data into information that may be useful in decision making. This form of AI has been popular and in existence for a long time. Its popularity and longevity stem from the underlying principle that it is rule-based and can only predict from a fixed set of probable outcomes, based on the information already provided. This form of AI was first seen in 1997, when IBM’s Deep Blue won a match against chess grandmaster Garry Kasparov. Though the computer was retired soon after, the concept of a machine adapting to a large set of rules and making decisions became a reality. Later, Apple’s Siri, Google’s Google Now, Microsoft’s Cortana, and Amazon’s Alexa enhanced the powers of AI and entered our daily lives. This form of intelligence – primarily the ability to compute – is known as Applied AI, Weak AI, or Narrow AI, and it is developed quickly to serve a specific purpose.

Amazon, Apple, Google, and Microsoft have not yet ended their quest to be your personal assistant. They are aiming to understand your emotions when you talk to them, which requires context around the data provided to them – and with this, they want to develop the ability to negotiate decisions for you. Tesla and Google have already tried to take it to the next level by releasing autonomous driving software and devices: AI in the true sense. This form of AI is known as General Purpose Artificial Intelligence.

AI is exciting, and it is growing in presence and applications every day. The stories from sci-fi are becoming reality sooner rather than later. At the heart of this growth, however, lies the importance of abundant data – data that can be managed, mined, analyzed, and processed into information. Enterprise Information Management has an important role to play in the growth of AI in enterprises. With its ability to store, manage, and present data, EIM is bridging that gap today.

Read More

AI-Enhanced Search Goes Further Still With Decisiv 8.0

Decisiv

OpenText™ Decisiv extends enterprise search with the power of unsupervised machine learning, a species of AI. I recently blogged about how Decisiv’s machine learning takes search further, helping users find what they’re looking for, even when they’re not sure what that is. Now, Decisiv 8.0 – part of OpenText™ Release 16 and EP2 – takes the reach and depth of AI-enhanced search even further.

Take Search to More Places

In addition to being embedded in both internal and external SharePoint portals, Decisiv has long been integrated with OpenText eDOCS, enabling law firms to combine AI-enhanced search with sophisticated document management. Decisiv also connects to OpenText™ Content Suite, Documentum, and a wide range of other sources to crawl data for federated search. Decisiv 8.0 expands these integrations with the introduction of a new REST API. With this release, administrators can efficiently embed Decisiv’s powerful search capabilities into an even broader range of applications, such as conflicts systems, project management, CRM, and mobile-optimized search interfaces (see the sketch at the end of this post).

Take Search Deeper

Other enhancements in Decisiv 8.0 include a new Relevancy Analysis display, which shows researchers precisely why their search results received the rankings they did and even lets them compare the predicted relevance of selected documents. This enhancement helps researchers prioritize their research more effectively and helps administrators understand how the engine is functioning and being leveraged across the enterprise. New Open Smart Filter display options also help researchers benefit from using metadata filters to zero in on useful content. By opting to automatically show the top values in each filter category, administrators can educate researchers on how to use filters for faster access to the content they need, without training or explanation.

Decisiv Goes Beyond Legal

Decisiv’s premier law firm customer base leaves some with the impression that Decisiv is just for legal teams. In fact, Decisiv’s machine learning isn’t limited to any specific industry or use case. That’s because it analyzes unstructured content on a statistical basis, rather than a taxonomical one. (Surprisingly, sometimes lawyers do lead the way on versatile technology.)

…and Decisiv Goes to Toronto

Learn more about Decisiv Search and our other award-winning Discovery Suite products at Enterprise World this July. You’ll hear from top corporate, law firm, and government customers how their enterprises are leveraging OpenText’s machine learning to discover what matters in their data.
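Returning to the new REST API for a moment: to make the embedding idea concrete, here is a minimal sketch of surfacing search results inside another application over REST. The endpoint path, parameters, and response shape are invented for illustration – they are not Decisiv’s documented API, so consult the Decisiv 8.0 REST documentation for the real contract.

# Minimal sketch: surface search results inside another application
# (e.g., a conflicts or CRM system) via a REST call. The endpoint,
# parameters, and response shape are hypothetical, for illustration only.
import requests

SEARCH_URL = "https://decisiv.example.com/api/search"  # hypothetical endpoint

def find_documents(query: str, max_results: int = 10) -> list:
    resp = requests.get(
        SEARCH_URL,
        params={"q": query, "limit": max_results},
        headers={"Authorization": "Bearer <token>"},
        timeout=15,
    )
    resp.raise_for_status()
    return resp.json().get("results", [])  # assumed JSON shape

# e.g., a conflicts-check screen could call:
for doc in find_documents("Acme Corp merger engagement letters"):
    print(doc.get("title"), "-", doc.get("relevance"))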

Read More

Step-by-Step Guide: Integrate Market-Leading Analytics Engines With InfoArchive

Analytics

Gaining further insight from your data is a must-have in today’s enterprise. Whether you call it analytics, data mining, business intelligence, or big data, the task is the same: extracting insight from a massive heap of data. But what if your data has already been archived? What if it now resides in your long-term archiving platform? Will you be able to use it in all analytics scenarios? Let me demonstrate how easily it can be done if your archiving platform is OpenText™ InfoArchive (IA).

A customer recently requested a demonstration of integration with analytics/BI tools in a workshop we were running. The question: what possibilities does InfoArchive offer for integrating with third-party analytics engines? The answer is that everything in InfoArchive is exposed to the outside world through a REST API. When I say everything, I mean every action, configuration object, search screen – literally everything. So we decided to use the REST API for the analytics integration demo.

Which analytics/BI tool to pick? A quick look at the Gartner Magic Quadrant has some hints. I’ve used Tableau with InfoArchive in the past, so let’s look at another option on the Gartner list: Qlik. OpenText™ Analytics (or its open-source companion BIRT) is my other choice – for obvious reasons. Let’s get our hands dirty now!

Qlik

Qlik Sense Desktop seems to have a simple UI, but there are some powerful configuration options hidden behind the nice façade. To query a third-party source in Qlik, simply open the Data load editor and create a new connection. Pick the Qlik REST Connector and configure it. The connection configuration screen lets you specify the request URL, request body, and all necessary header values – all you need for a quick test.

Now that the connection is configured, you have to tell Qlik how to process the IA REST response. Click the “Select data” button in your connection and Qlik will connect to InfoArchive, execute the query, and show you the JSON results in a tree and table browser. All you need to do is pick the column names that you want Qlik to process, as shown below.

Since the IA REST response columns are stored in name-value elements, we have to transpose the data. This can easily be done with about 20 lines of code in the Qlik data connection:

Table3:
Generic LOAD * Resident [columns];

TradesTable:
LOAD Distinct [__KEY_rows] Resident [columns];

FOR i = 0 to NoOfTables()
  TableList:
  LOAD TableName($(i)) as Tablename AUTOGENERATE 1
  WHERE WildMatch(TableName($(i)), 'Table3.*');
NEXT i

FOR i = 1 to FieldValueCount('Tablename')
  LET vTable = FieldValue('Tablename', $(i));
  LEFT JOIN (TradesTable) LOAD * RESIDENT [$(vTable)];
  DROP TABLE [$(vTable)];
NEXT i

We’re almost done. Let’s visualize the data in a nice report now. Select “Create new sheet” on the Qlik “App overview” page and add tables and charts to present your data. My example can be seen below. Just click “Done” at the top of the screen and you’ll see the end-user view: browse the data, filter it, and all charts will dynamically update based on your selection. Job done!

Continue reading on Page 2 by clicking below.
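Incidentally, if you’d like to smoke-test the same IA REST query outside Qlik before wiring up the connector, a few lines of script are enough. Here is a minimal sketch in Python – the endpoint path, authentication, and JSON shapes are illustrative assumptions rather than documented InfoArchive API details:

# Minimal sketch: call an InfoArchive REST search endpoint and tabulate
# the name-value result columns. Endpoint path, auth, and JSON shape
# are illustrative assumptions -- check your IA REST documentation.
import requests

IA_URL = "https://ia.example.com/restapi/searches/trades"  # hypothetical
HEADERS = {"Authorization": "Bearer <token>", "Content-Type": "application/json"}

body = {"criterions": [{"name": "tradeDate", "operator": "BETWEEN",
                        "values": ["2016-01-01", "2016-12-31"]}]}  # assumed query shape

resp = requests.post(IA_URL, json=body, headers=HEADERS, timeout=30)
resp.raise_for_status()

# Transpose the name-value column elements into one dict per row,
# mirroring what the Qlik load script above does with its LEFT JOINs.
rows = []
for row in resp.json().get("rows", []):
    rows.append({col["name"]: col["value"] for col in row.get("columns", [])})

print(f"Fetched {len(rows)} rows; first row: {rows[0] if rows else None}")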

Read More

Find More Knowledge in Your Information at Enterprise World 2017

If your office is like most, it’s got millions of gigabytes of information stashed away on computer hard drives – and maybe even file cabinets full of paper! Every single business process generates enormous data streams – not just your ERP and CRM systems, but payroll, hiring, even ordering lunch from the caterer for those regular Thursday meetings. So wouldn’t you like to find out how you can leverage the knowledge already contained in all that information? And derive more value from your existing systems of record?

Come to OpenText Enterprise World this July and you’ll hear how organizations in every industry are using the cutting-edge techniques of OpenText™ Analytics to derive more value from their data – including self-service access, prediction and modeling, and innovative techniques for getting insights more easily out of unstructured data (aka the stuff you use most of the time: documents, messages, and social media). We are excited to showcase OpenText Magellan at this year’s conference and show you the impact it will have in helping analyze massive pools of data and harness the power of your information. We’ll also preview the roadmap of new developments in the OpenText Analytics Suite.

Helping Our Human Brains Navigate Big Data

Thanks to cheap and abundant technology, we have so much data at our disposal – creating up to 2.5 exabytes a day by some estimates – that the sheer amount is overwhelming. In fact, it’s more than our human brains can make sense of. “It’s difficult to make decisions, because that much data is more than we can make sense of, cognitively,” says Lalith Subramanian, VP of Engineering for Analytics at OpenText. “That’s where machine learning and smart analytics come into the picture,” he explains. “We intend to do for Big Data what earlier reporting software companies tried to do for business intelligence – simplify it and make it less daunting, so that reasonably competent people can do powerful things with Big Data.” Expect plenty of demos and use cases, including a look at our predictions from last year’s Enterprise World about who would die on Season 6 of “Game of Thrones,” and new prognostications for Season 7.

Do-It-Yourself Analytics Provisioning

Meanwhile, OpenText also plans to unveil enhancements to the Analytics Suite that will give users even more power to blend and explore their own data. OpenText™ iHub, our enterprise-grade deployment server for interactive analytics at the core of the Analytics Suite, is adding the ability to let non-technical users provision their own data for analysis, rather than relying on IT, Subramanian says. They can freely blend and visualize data from multiple sources. These sources will soon include not just structured data, such as spreadsheets and prepared database files or ERP records, but unstructured data including text documents, web content, and social media streams. That’s because new algorithms to digest and make sense of language and text are being infused into both OpenText Analytics and OpenText™ InfoFusion, an important component in the content analytics process. With OpenText™ Big Data Analytics, users will be able to apply these new, customized algorithms to self-provisioned data of many types. At the same time, InfoFusion is adding adapters to pull content off Twitter feeds and web sites automatically.

The Word on the Street

One use case for this combination of OpenText InfoFusion and the Analytics Suite is to research topics live, as they’re being discussed online, Subramanian adds.
“You could set it up so that it goes out as often as desired to see the latest things related to whatever person or topic you’re interested in. Let’s say OpenText Corporation – then it’ll go look for news coverage about OpenText plus the press releases we publish, plus Tweets by and about us, all aggregated together, then analyzed by source, sub-topic, emotional tone (positive, negative, or neutral), as we’ve demonstrated with our content analytics-based Election Tracker. Over time we’d add more and more (external information) sources.” Keep in mind, politicians, pundits, and merchants have been listening to “the word on the street” for generations. But that used to require armies of interns to go through all the mail, voice messages, conversations, or Letters to the Editor – and the net result was score-keeping (“yea” vs. “nay” opinions) or subjective impressions. Now these opinions, like every other aspect of the digital economy, can be recorded and analyzed by software that’s objective and tireless. And they can add up to insights that enrich your business intelligence for better decision-making. To see and hear all of this in person, don’t miss Enterprise World in Toronto, July 10-13. Click here for more information and to register.
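To make the “word on the street” pipeline described above concrete, here is a toy sketch of the aggregate-and-tally step. The keyword scoring is a crude stand-in for real text analytics such as InfoFusion’s sentiment engine, and all data and names are hypothetical:

# Toy sketch of "word on the street" aggregation: tally mention tone
# by source. The keyword scoring stands in for real text analytics
# (e.g., InfoFusion's sentiment engine); data and names are hypothetical.
from collections import Counter

POSITIVE = {"growth", "record", "wins", "praised"}
NEGATIVE = {"lawsuit", "outage", "decline", "criticized"}

def tone(text: str) -> str:
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

mentions = [  # would come from news / Twitter / web adapters
    {"source": "news", "text": "OpenText posts record growth this quarter"},
    {"source": "twitter", "text": "outage reported, users criticized the response"},
    {"source": "press", "text": "analysts praised the new analytics suite"},
]

tally = Counter((m["source"], tone(m["text"])) for m in mentions)
for (source, label), n in sorted(tally.items()):
    print(f"{source:8s} {label:8s} {n}")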

Read More

What a Difference a Day Makes: Get up to Speed on OpenText Analytics in 7 Hours

Analytics Workshop

One of the biggest divides in the work world these days is between people with software skills and “business users” – the ones who can work their magic on data and make it tell stories, and… well, everyone else (those folks who often have to go hat in hand to IT, or their department’s digital guru, and ask them to crunch the numbers or build them a report). But that divide is eroding with help from OpenText™ Analytics. With just a few hours’ training, you can go from absolute beginner to creating sophisticated data visualizations and interactive reports that reveal new insights in your data.

And if you’re within travel distance of Washington, D.C., have we got an offer for you! Join OpenText Analytics Wednesday, May 10, at The Ritz-Carlton, Arlington, VA for a free one-day interactive, hands-on analytics workshop that dives deep into our enterprise-class tools for designing, deploying, and displaying visually appealing information applications. During this workshop, you’ll gain insights from our technical experts Dan Melcher and Geff Vitale. You’ll learn how OpenText Analytics can provide valuable insights into customers, processes, and operations, improving how you engage and do business. We recently added a bonus session in the afternoon on embedding secure analytics into your own applications. Here, you’ll see why many companies use OpenText™ iHub to deliver embedded analytics, either to customers (e.g., through a bank’s portal) or as an OEM app vendor, embedding our enterprise-grade analytics on a white-label basis to speed up the development process.

Here’s what to expect in each segment:

Learning the Basics of OpenText Analytics Suite – Get introduced to the functions and use cases of OpenText Analytics Suite, including basic data visualizations and embedded analytics. Start creating your own interactive reports and consider what this ability could do for your own business.

Analyze the Customer – You’ll learn about the advanced and predictive analysis features of the Analytics Suite by doing a walk-through of a customer analysis scenario. Begin segmenting customer demographics, discovering cross-sell opportunities, and predicting customer behavior, all in minutes – no expertise needed in data science or statistics.

Drive Engagement with Dashboards – A self-service scenario where you create and share dashboards completely from scratch will introduce the dashboarding and reporting features of OpenText Analytics. See how easy it is to assemble interactive data visualizations that allow users to filter, pivot, explore, and display the information any way they wish.

Embed Secure Analytics with iHub – After the lunch break, learn how to enable secure analytics in your application, whether as a SaaS or on-premises deployment. OpenText answers the challenge with uncompromising extensibility, scalability, and reliability.

Who should attend?

IT directors and managers, information technology managers, business analysts, product managers and architects

Team members who define, design, and deploy applications that use data visualizations, reports, dashboards, and analytics to engage their audience

Consultants who help clients evaluate and implement the right technology to deliver data visualizations, reports, dashboards, and analytics at scale

If you are modernizing your business with Big Data and want your entire organization to benefit from compelling data visualizations, interactive reports, and dashboards – then don’t miss this free, hands-on workshop! For more details or to sign up, click here.
And if you’d really like to dive into the many facets of OpenText Analytics, along with Magellan, our next-generation cognitive platform, and the wide world of Enterprise Information Management, don’t miss Enterprise World, July 10-13 in Toronto. For more information, click here.

Read More

Enterprise World: Analytics Workshop Takes You From Zero to Power User in 3 Hours

Analytics Workshop

One of the great things about OpenText™ Analytics Suite is its ease of use. In less than three hours, you can go from being an absolute beginner to creating dynamic, interactive, visually appealing reports and dashboards. That’s even enough time to become a “citizen data scientist,” using the advanced functionality of the Analytics Suite to perform sophisticated market segmentation and predict likely outcomes and customer behavior. So, by popular demand, we’re bringing back our Hands-On Analytics Workshop at Enterprise World 2017, July 10-13 in Toronto.

The workshop comprises three 50-minute sessions on Tuesday afternoon, July 11. Just bring your laptop, connect to our server, and get started with a personalized learning experience. You can attend the sessions individually, but for the full experience you’ll want to attend all three. Learn how businesses and nonprofits use OpenText Analytics to better engage customers, improve processes, and modernize their operations by providing self-service analytics to a wide range of users across a variety of use cases. This three-part workshop is also valuable for users of OpenText™ Process Suite, Experience Suite, Content Suite, and Business Network. Here’s what to expect in each segment:

1. ANA-200: Learning the Basics of OpenText Analytics Suite – This demo-packed session serves as an introduction to the series and will arm you with all you need to know about the OpenText Analytics Suite, including use cases, benefits, and customer successes, as well as a deep dive into product features and functionality. Through a series of sample application demonstrations, you will learn how OpenText Analytics can meet any analysis requirement or use case, including yours! This session is a perfect lead-in to the next two sessions, ANA-201 and ANA-202.

2. ANA-201 Hands-On Workshop: Using Customer Analytics to Improve Engagement – This hands-on session will introduce the advanced and predictive analysis features of the Analytics Suite by walking you through a customer analysis scenario using the live product. Connect from your own laptop to our server and begin segmenting customer demographics, discovering cross-sell opportunities, and predicting customer behavior, all in minutes – no expertise needed in data science or statistics. You will learn how OpenText Analytics can provide valuable insights into customers, processes, and operations, improving how you engage and do business.

3. ANA-202 Hands-On Workshop: Working with Dashboards to Empower Your Business Users – This hands-on session will introduce the dashboarding and reporting features of OpenText Analytics by walking you through a self-service scenario where you create and share dashboards completely from scratch. Connect from your laptop to our server and see just how easy it is to assemble interactive data visualizations that allow users to filter and pivot the information any way they wish, in just a matter of minutes! You will learn how OpenText makes it easy for any user to analyze and share information, regardless of their technical skill.

Of course, we have plenty of other interesting sessions about OpenText Analytics planned for Enterprise World. Get a sneak peek at product road maps, exciting new features (including developments in Magellan, our cognitive software platform), and innovative customer use cases for the OpenText Analytics Suite. Plus, get tips from experts, immerse yourself in technical details, network with peers, and enjoy great entertainment.
Click here for more details about attending Enterprise World. See you in Toronto!

Read More

Knorr-Bremse Keeps the Wheels Rolling with Predictive Maintenance Powered by OpenText Analytics

diagnosis

Trains carry billions of passengers and tons of freight a year worldwide, so making sure their brakes work properly is no mere routine maintenance check. Helping rail transport operate more safely and efficiently is top-of-mind for the Knorr-Bremse Group, based in Munich, Germany. The company is a leading manufacturer of brakes and other components for trains, metro cars, and buses, and these components include sophisticated programming to optimize operations and diagnosis.

The company developed iCOM (Intelligent Condition Oriented Maintenance), an Internet of Things-based platform for automated maintenance and diagnosis. Through onboard sensors, iCOM gathers data wirelessly from more than 30 systems throughout a train car, including brakes, doors, wipers, heating, and ventilation. These IoT sensors continually report conditions such as temperature, pressure, energy generation, duration of use, and error states. iCOM analyzes the data to recommend condition-based, rather than static, scheduled maintenance. This means any performance issue can be identified before it becomes a serious safety problem or a more costly repair or replacement. For iCOM customers, this means better safety, more uptime, improved energy efficiency, and lower operating costs for their rail fleets.

As more customers adopted the solution, they began demanding more sophisticated analysis (to see when, where, and even why an event happens), more visually engaging displays, and the ability to build their own reports without relying on IT. Knorr-Bremse knew it needed to upgrade the technology it was using for analysis and reporting on the vast quantities of data the iCOM solution gathers, replacing open-source BIRT (Business Intelligence and Reporting Tools). A new analytics platform would also have to be scalable enough to cope with the enormous volumes of real-time data that thousands of sensors across a rail fleet continually generate. Further, Knorr-Bremse needed an analytics solution it could develop, embed into the overall iCOM platform, and bring to market with the least possible time and coding effort.

The answer to these challenges was OpenText™ Analytics Suite. “Due to the easy-to-use interface of OpenText Analytics, our developers were quickly productive in developing the analytics and reporting aspects of iCOM. iCOM is based on Java and consequently it has been very easy to integrate and embed the OpenText Analytics platform [into it]. It is not just about shortening the time to develop, though. The results have to look good, and with OpenText, they do,” says Martin Steffens, the iCOM digital platform project manager and software architect at Knorr-Bremse.

To learn more about Knorr-Bremse’s success with OpenText Analytics, including a potential drop of up to 20 percent in maintenance costs, click here.
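To picture the difference between static schedules and iCOM-style condition-based maintenance, here is a minimal sketch of out-of-band flagging over sensor readings – the systems, metrics, and thresholds are illustrative assumptions, not iCOM internals:

# Minimal sketch of condition-based maintenance flagging: watch a
# stream of sensor readings and raise a flag only when readings drift
# out of band. Fields and thresholds are illustrative, not iCOM's.
from dataclasses import dataclass

@dataclass
class Reading:
    car_id: str
    system: str      # e.g., "brakes", "doors", "hvac"
    metric: str      # e.g., "temperature_c", "pressure_bar"
    value: float

# Assumed acceptable operating bands per (system, metric)
LIMITS = {
    ("brakes", "temperature_c"): (-20.0, 90.0),
    ("brakes", "pressure_bar"): (4.5, 6.5),
}

def maintenance_flags(readings):
    """Yield human-readable flags for out-of-band readings."""
    for r in readings:
        band = LIMITS.get((r.system, r.metric))
        if band and not (band[0] <= r.value <= band[1]):
            yield f"{r.car_id}/{r.system}: {r.metric}={r.value} outside {band}"

stream = [
    Reading("car-07", "brakes", "temperature_c", 104.2),
    Reading("car-07", "brakes", "pressure_bar", 5.1),
]
for flag in maintenance_flags(stream):
    print("INSPECT:", flag)   # condition-based: inspect now, not next quarter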

Read More

Discovery Rises at Enterprise World

This summer will mark a full year since Recommind became OpenText Discovery, and we’re preparing to ring in that anniversary at our biggest conference yet: Enterprise World 2017! We’re inviting all of our clients, partners, and industry peers to join us for three days of engaging roundtables, interactive product demos, Q&A with experts, a keynote from none other than Wayne Gretzky, and—of course—the latest updates, roadmaps, and visions from OpenText leaders. Here’s a sneak peek of what to expect from OpenText Discovery’s track:

The Future of Enterprise Discovery. We’ll be talking at a strategic and product-roadmap level about unifying Enterprise Information Management (EIM) with eDiscovery. New data source connectors, earlier use of analytics, and even more flexible machine learning applications are on the way!

Introduction to eDiscovery. Our vision for the future of eDiscovery is broader than the legal department, and we’re spreading that message with sessions tailored for IT and data security professionals who want to know more about the legal discovery process and data analysis techniques.

Why Legal is Leading the Way on AI. Our machine learning technology was the first to receive judicial approval for legal document review, and in the years since, we’ve continued to innovate, develop, and expand machine learning techniques and workflows. In our sessions, we’ll highlight current and future use cases for AI in investigations, compliance, due diligence, and more.

Contract Analysis and Search. We’ll also have sessions focused exclusively on innovations in enterprise search and financial contract analysis. Join experts to learn about the future of predictive research technology and the latest data models for derivative trading optimization and compliance.

Our lineup of sessions is well underway, and we’ve got an exciting roster of corporate, academic, government, and law firm experts, including a special keynote speaker on the evolving prominence of technology in law. Register here for EW 2017 with promo code EW17TOR for 40% off, and we’ll see you in Toronto!

Read More

From KPIs to Smart Slackbots, Hot New Analytics Developments at OpenText Enterprise World 2017

Innovation never sleeps in the OpenText Analytics group, where we’re working hard to put together great presentations for Enterprise World 2017, July 10-13 in Toronto. We’ll offer a sneak peek at product road maps, exciting new features, and innovative customer use cases for the OpenText Analytics Suite. Plus, you can get hands-on experience building custom-tailored apps, get tips from experts, immerse yourself in technical details, and network with peers.

Learn about:

Reporting and dashboards with appealing, easy-to-create visual interfaces

Self-service analytics to empower your internal users and customers and help you make better decisions

Best-of-breed tools to crunch massive Big Data sets and derive insights you never could have before

Cognitive computing and machine learning

Capturing the Voice of the Customer

Structured and unstructured content analytics that can unlock the hidden value in your documents, chats, and social media feeds

Our presentations include:

Industry-focused sessions including OpenText Analytics for Financial Services. Hear how we add value in common use cases within the financial industry, including customer analytics, online consumer banking, and corporate treasury services.

Showcases of hot new functions like Creating Intelligent Analytic Bots for Slack (the popular online collaboration tool).

Personalized training in OpenText Analytics. Our three-part Hands-On Analytics Workshop can take you from absolute beginner to competent user, harnessing the power of Big Data for better insights and building compelling data visualizations, interactive reports, and dashboards.

Technical deep dives with popular tools, such as Business Performance Management Analytics. We’ll show you how to use OpenText Analytics to measure KPIs and performance-driven objectives, including the popular Balanced Scorecard methodology.

A fascinating use case: Financial Contract Analysis with Perceptiv. See how customers are using our advanced analytics tool to capture, organize, and extract relevance from over 200 fields in half a million financial derivative contracts.

How Many Lawyers Does It Take to Analyze an Email Server? Learn how lawyers and investigators are using our cutting-edge OpenText Discovery technology, including email mapping, concept-based search, and machine learning, to find the “smoking guns” in thousands of pages of email.

Click here for more details about attending Enterprise World. See you in Toronto!

Read More

For Usable Insights, You Need Both Information and the Right Analytical Engine

Data

“It’s all about the information!” Chances are you’ve heard this before. If you are a Ben Kingsley or Robert Redford fan, you may recognize the line from Sneakers (released in 1992). Yes, 1992 – before the World Wide Web! (Remember, Netscape didn’t launch the first commercially successful Web browser until 1993.) Actually, it’s always been about the information, or at least the right information – what’s needed to make an informed decision, not just an intuitive one. In many ways the information, the data, has always been there; it’s just that until recently, it wasn’t readily accessible in a timely manner. Today we may not realize how much data is available to us through technology like the mobile device in your pocket – at 12GB, an iPhone 6S holds 2,000 times more than the 6MB programs IBM developed to monitor the Apollo spacecraft’s environmental data. (Which demonstrates the reality of Moore’s Law, but that’s another story.) Yet because it’s so easy to create and store large amounts of data today, far too often we’re drowning in data and experiencing information overload.

Drowning in Data

Chances are you’re reading this in between deleting that last email, before your next Tweet, because the conference call you are on has someone repeating the information you provided yesterday. Bernard Marr, a contributor to Forbes, notes “that more data has been created in the past two years than in the entire previous history of the human race.” Marr’s piece has at least 19 other eye-opening facts about how much data is becoming available to us, but the one that struck me the most was this: less than 0.5% of all data is ever analyzed and used. Imagine the opportunities missed. Just within the financial industry, the possibilities are limitless. For example, what if a customer’s transaction patterns indicated they were buying more and more auto parts as well as making more payments to their local garage (or mechanic)? Combined with a recent increase in automatic payroll deposits, might that indicate this customer would be a good prospect for a 0.9% new-car financing offer?

Or imagine the crises that could be avoided. Think back to February 2016 and the Bangladesh Bank heist, where thieves managed to arrange the transfer of $81 million to the Rizal Commercial Banking Corporation in the Philippines. While it’s reasonable to expect existing controls might have detected the theft, it turns out that a “printer error” alerted bank staff in time to forestall an even larger theft, of up to $1 billion. The SWIFT interface at the bank is configured to print out a record each time a funds transfer is executed, but on the morning of February 5 the print tray was empty, and it took until the next day to get the printer restarted. Meanwhile, the New York Federal Reserve Bank had sent queries to the bank questioning the transfers. What alerted them? A typo. Funds to be sent to the Shalika Foundation were addressed to the “Shalika fandation.” The full implications of this are covered in WIRED Magazine.

Analytics: Spotting Problems Before They Become Problems

Consider the difference if the bank had had a toolset able to flag the anomaly of a misspelled beneficiary in time to generate alerts and hold up the transfers for additional verification. The system was programmed to generate alerts as print-outs; it’s only a small step to have alerts like this sent as an SMS text or email to the bank’s compliance team, which might have attracted notice sooner. To best extract value from the business data available to you requires two things: an engine and a network.
The engine should be like the one in OpenText™ Analytics, designed to perform the data-driven analysis needed. With the OpenText™ Analytics Suite, financial institutions can not only derive data-driven insights to offer value-added solutions to clients, they can also better manage the risk of fraudulent payment instructions, based on insights derived from a client’s payment behavior. In the Bangladesh Bank case, for example, analytics might have flagged some of the fraudulent transfers to Rizal Bank in the Philippines by correlating the facts that the Rizal accounts had been opened only in May 2015, contained only $500 each, and had never previously been beneficiaries (see the sketch at the end of this post).

Business Network: Delivering Data to Analytical Engines

The other, equally important tool is the network. Just as trains need tracks, an analytics engine needs data – and the network to deliver it. Today, more and more of the data needed to extract value comes from outside the enterprise. The OpenText™ Business Network is one way thousands of organizations exchange the data needed to manage their business, and it provides the fuel for their analytical engines. For example, suppose a bank wanted to offer its customers the ability to generate ad-hoc reporting through their banking portal. With payment, collection, and reporting data flows delivered through OpenText Business Network Managed Services, the underlying data would be available to the bank’s analytical engine.

Obviously, much of the data involved in the examples I’ve provided would be sensitive, confidential, and in need of robust information security controls to keep it safe. That will be the subject of my next post.
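As a concrete (and much simplified) illustration of the correlations described above, here is a minimal sketch of rule-based transfer screening. The account fields, thresholds, and fuzzy name check are illustrative assumptions, not any bank’s actual controls:

# Minimal sketch of rule-based payment screening, in the spirit of the
# Bangladesh Bank example: flag transfers to young, thin, first-time
# beneficiary accounts and catch near-miss beneficiary names.
# All fields, thresholds, and names are illustrative assumptions.
import difflib
from datetime import date

KNOWN_BENEFICIARIES = {"Shalika Foundation"}

def screen_transfer(beneficiary, account_opened, account_balance,
                    prior_transfers, amount, today):
    flags = []
    account_age_days = (today - account_opened).days
    if account_age_days < 365 and prior_transfers == 0 and amount > 100_000:
        flags.append("large first-time transfer to an account under a year old")
    if account_balance < 1_000 and amount > 100 * account_balance:
        flags.append("amount dwarfs the beneficiary's existing balance")
    # Near-miss name check: 'Shalika fandation' vs 'Shalika Foundation'
    close = difflib.get_close_matches(beneficiary, KNOWN_BENEFICIARIES, cutoff=0.8)
    if close and beneficiary not in KNOWN_BENEFICIARIES:
        flags.append(f"beneficiary name resembles {close[0]!r} but does not match")
    return flags

for f in screen_transfer("Shalika fandation", date(2015, 5, 1), 500.0, 0,
                         20_000_000.0, date(2016, 2, 5)):
    print("HOLD FOR REVIEW:", f)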

Read More

Steel Mill Gains Insight, Makes Better Decisions Through Analytics

analytics

When you think of a steel mill, crucibles of glowing molten metal, giant molds, and rollers probably come to mind, not complex financial analysis. But like every other industry nowadays, steel mills – especially ones that specialize in scrap metal recycling – have to keep reviewing their material and production costs and the ever-changing demand for their products, so that they can perform efficiently in a competitive global market.

That was the case for North Star BlueScope Steel in Delta, Ohio, which produces hot-rolled steel coils, mostly for the automotive and construction industries. Founded in 1997, the company is the largest scrap steel recycler in Ohio, processing nearly 1.5 million tons of metal a year. To operate profitably, North Star BlueScope examines and analyzes its costs and workflow every month, pulling in data from all over the company, plus external market research. But it was hampered by slow and inefficient technology, centered on Microsoft Excel spreadsheets so large and unwieldy they took up to 10 minutes just to open. Comparing costs for, say, the period of January through May required North Star staffers to open five separate spreadsheets (one for each month) and combine the information manually. Luckily, the company was already using OpenText™ iHub as a business intelligence platform for its ERP and asset management systems. It quickly realized iHub would be a much more efficient solution for its monthly costing analysis than the Excel-based manual process.

Making Insights Actionable

In fact, North Star BlueScope Steel ended up adopting the entire OpenText™ Analytics Suite, including OpenText™ Big Data Analytics (BDA), whose advanced approach to business intelligence lets it easily access, blend, explore, and analyze data. The results were impressive. The steel company can now analyze a much larger range of its data and get better insights to steer decision-making. For example, it can draw on up to five years’ worth of data in a single, big-picture report, or drill down to a cost-per-minute understanding of mill operations. Now it has a better idea of the grades and mixes of steel products most likely to generate higher profits, and the customers most likely to buy those products.

To learn more about how North Star BlueScope Steel is using OpenText Analytics to optimize its operations, plus its plans to embrace the Internet of Things by plugging data streams from its instruments about electricity consumption, material usage, steel prices, and even weather directly into Big Data Analytics, click here.

Read More

Unlock Unstructured Data and Maximize Success in Your Supply Chain

By any standard, a successful business is one that can find new customers, discover new markets, and pursue new revenue streams. But today, the true benchmark is succeeding via digital channels, delivering an excellent customer experience, and embracing digital transformation. Going digital can increase your agility, and with analytics you can get the level of insight you need to make better decisions.

Advances in analytics and content management software are giving companies more power to cross-examine unstructured content, rather than leaving them to rely on intuition and gut instinct. Now, you can quickly identify patterns and gain a new level of visibility into business operations. Look inside your organization to find the value locked within the information you have today. The unstructured data being generated every day inside and outside your business holds targeted, specific intelligence that is unique to your organization and can be used to find the keys to current and future business drivers. Unstructured data like emails, voicemails, written documents, presentations, social media feeds, surveys, legal depositions, web pages, videos, and more offers a rich mine of information that can inform how you do business. Unstructured content, on its own or paired with structured data, can be put to work to refine your strategy.

Predictive and prescriptive analytics offer unprecedented benefits in the digital world. Consider, for instance, the data collected from a bank’s web chat service. Customer service managers cannot read through millions of lines of free text, but ignoring this wealth of information is not an option either. Sophisticated data analytics allows banks to spot and understand trends, like common product complaints or frequently asked questions. They can see what customers are requesting to identify new product categories or business opportunities. Every exchange, every interaction, and all of your content holds opportunity that you can maximize.

Making the most of relevant information is a core principle of modern enterprise information management. This includes analyzing unstructured information that is outside the organization, or passed between the company and trading partners across a supply chain or business network. As more companies use business networks, there is an increase in the types and amounts of information flowing across them: things like orders, invoices, delivery information, partner performance metrics, and more. Imagine the value of understanding the detail behind all that data, and the insight it can provide for future planning. Even better: imagine analyzing it fast enough to make a difference in what you do today. Here are two common, yet challenging, scenarios and their solutions.

Solving challenges in your enterprise

Challenges within the business network – A business network was falling behind in serving its customers. It needed to increase speed and efficiency within its supply chain to provide customers with deeper business process support and rich analytics across the entire trading partner ecosystem. With data analytics, the company learned more from its unstructured data – emails and documents – and was able to gain clearer insights into transactions flowing across the network. The new system allows it to identify issues and exceptions earlier, take corrective action, and avoid problems before they occur.

Loss of enterprise visibility – A retail organization was having difficulty supporting automatic machine-to-machine data feeds coming from a large number of connected devices within its business network. With the addition of data analytics across unstructured data sources, it gained extensive visibility into the information flowing across its supply chain. Implementing advanced data analytics allowed the retailer to analyze information coming from all connected devices, affording a much deeper view into data trends. This intelligence allowed the retailer to streamline its supply chain processes even further.

Want to learn more? Explore how you can move forward with your digital transformation: take a look at how OpenText Release 16 enables companies to manage the flow of information in the digital enterprise, from engagement to insight.

Read More

Westpac Bank Automates and Speeds Up Regulatory Reporting with OpenText Analytics

Westpac

When Westpac Banking Corporation was founded in 1817 in a small waterfront settlement in Australia, banking was rudimentary. Records were kept with quill pens in leather-bound ledgers: pounds, shillings, and pence into the cashbox; pounds, shillings, and pence out. (Until a cashier ran off with half the fledgling bank’s capital in 1821, that is.) Now, exactly 200 years after Westpac’s parent company opened its doors, it’s not only the oldest bank in Australia but the second-largest, with 13 million customers worldwide and over A$812 billion under management. Every year it does more and more business in China, Hong Kong, and other Asia-Pacific nations.

The downside to this expansion: more forms to fill out. Managing the electronic and physical flow of cash across national borders is highly regulated, requiring prompt and detailed reports of transactions, delivered in different formats for each country and agency that oversees various aspects of Westpac’s business. These reports require information from multiple sources throughout the company. Until recently, pulling out and consolidating all these complex pieces of data was a manual, slow, labor-intensive process that often generated data errors, according to Craig Chu, Westpac’s CIO for Asia. The bank knew there had to be a better way to meet its regulatory requirements – but one that wouldn’t create its own new IT burden.

A successful proof of concept led to Westpac adopting an information management and reporting solution from OpenText™ Analytics. To hear Chu explain how Westpac streamlined and automated its reporting process with OpenText™ iHub and Big Data Analytics, and all the benefits his company has realized, check out this short video showcasing this success story. (Spoiler alert: self-service information access empowers customers and employees.) If you’d like to learn more about what the OpenText Analytics Suite could do for your organization, click here.

Read More

Post-Election Score: Pundits 0, Election Tracker 1

election tracker

In the midst of post-election second-guessing over why so many polls and pundits failed to predict Donald Trump’s win, there was one clear success story: OpenText™ Election Tracker. Election Tracker, the web app that analyzed news coverage of the presidential race from over 200 media outlets worldwide for topics and sentiment, was a great showcase for the speed, robustness, and scalability of the OpenText™ Information Hub (iHub) technical platform it was built on. Facing demands for more than 54,000 graphic visualizations an hour on Election Day, it ramped up quickly with no downtime – the kind of performance you’d expect from OpenText™ Analytics.

Moreover, the tracker revealed patterns in the tone and extent of campaign news coverage, providing valuable extra insight into voter concerns that pre-election polls didn’t uncover – insight that didn’t end after Election Day. It’s just one in a series of proofs of concept showing how our unstructured data analytics solutions shine at analyzing text and other unstructured data. They bring to the surface previously hard-to-see patterns in any kind of content stream – social media, customer comments, healthcare service ratings, and much more. OpenText Analytics solutions analyze these patterns and bring them to life in attractive, easy-to-understand, interactive visualizations. And if some unforeseen event ends up generating millions of unexpected clicks, Tweets, or comments that you need to sift through quickly, iHub offers the power and reliability to handle billions of data points on the fly.

Hello, Surprise Visitors!

Speaking of unforeseen events: some of the Election Tracker traffic was due to mistaken identity. On Election Day, so many people were searching online for sites with live tracking of state-by-state election results that electiontracker.us became one of the top results on Google that day. At peak demand, the site was getting nearly 8,000 hits an hour, more than 100 times the usual traffic.

Senior Director of Technical Marketing Mark Gamble, an Election Tracker evangelist, was the site administrator that day. “On November 8 at around 6 a.m. I was about to get on a flight when I started getting e-mail alerts from our cloud platform provider that the Election Tracker infrastructure was getting hammered from all those Google searches. I’d resolve that alert, and another one would pop up.”

“We had it running at just two nodes of our four-node cluster, to keep day-to-day operating costs down. Our technical team said, ‘Let’s spin up the other two nodes.’ That worked while I was changing planes in Detroit. But when I got off, my phone lit up again: demand was still climbing. It was just unprecedented traffic.”

“So we had our cloud provider double the number of cores, or CPUs, that run on each node. And that kept up with demand. The site took a bit longer to load, but it never once crashed. That’s the advantage of running in the cloud – you can turn up the volume on the fly.”

“Of course, the flexibility of our iHub-based platform is unique. All the cloud resources in the world won’t help you if you can’t quickly and efficiently take advantage of them.”

Easy Visualizing

Demand on the site was heightened by the Election Tracker’s live, interactive interface. That’s intentional, because OpenText Analytics solutions encourage users to take a self-service approach to exploring their data. “It’s not just a series of static pages,” explains Clement Wong, Director of Analytics On-Demand Operations.
“The infographics are live and change as the viewer adjusts the parameters. With each page hit, a visitor was asking for an average of seven visualizations. That means the interface is constantly issuing additional calls back and forth to the database and the analytic engine. iHub has the robustness to support that.” (In fact, at peak demand the Tracker was creating more than 15 new visualizations every second – which squares with the headline numbers: nearly 8,000 page hits an hour at seven visualizations apiece is roughly the 54,000-plus visualizations an hour cited above.)

“Some of the reporters who wrote about Election Tracker told us how much they enjoyed being able to go in and do comparisons on their own,” Gamble says. “For example, look at how much coverage each candidate got over the past 90 days, compared to the last 7 days, then filter for only non-U.S. news sources, or drill down to specific topics like healthcare or foreign policy. That way they didn’t have to look at static figures and then contact us to interpret for them; the application granted them the autonomy to draw their own conclusions.”

Great Fit for Embedding

“The self-service aspect is one reason that iHub and other OpenText Analytics solutions are a great fit for embedding into other web sites – use cases such as bank statements or utility usage,” Gamble adds. “First of all, an effective embedded analytic application has to be highly interactive and informative, so people want to use it – not just look at ready-made pages, but feel comfortable exploring on their own. Embedded analytics also requires seamless integration with the underlying data sources, so the visuals are integral and indistinguishable from the rest of the site, and it needs high scalability to keep up with growing usage.”

What’s Next?

The iHub/InfoFusion integration underlying the Election Tracker is already being used in other proofs of concept. One is helping consumer goods manufacturers analyze customers’ social media streams for their sentiments about the product and needs or concerns. “If you think of Election Tracker as the Voice of the Media, the logical next step is Voice of the Customer,” Gamble says. The Election Tracker is headlining the OpenText Innovation Tour, which just wrapped up in Asia and resumes in spring 2017.

Read More

Telco Accessibility 101: What’s Now Covered by U.S. Legislation

telco accessibility

In a word, everything. Name a telecommunications product or service and chances are it has a legal requirement to comply with federal accessibility laws. Let’s see…

Mobile connectivity services for smartphones, tablets, and computers? Check
Smartphones, tablets, and computers? Check
Internet services (e.g., cable, satellite)? Check
Television services (e.g., cable, satellite, broadcast)? Check
Televisions, radios, DVD/Blu-ray players, DVRs, and on-demand video devices? Check
Email, texting, and other text-based communication? Check
VoIP communications and online video conferencing? Check
Fixed-line phone services? Check
Fixed-line telephones, modems, answering machines, and fax machines? Check
Two tin cans attached by a string? Check

All of these products and services are covered by U.S. accessibility legislation (except the cans and string). What laws are we talking about here? Mainly Section 255 of the Telecommunications Act of 1996, for products and services that existed before 1996, and the Twenty-First Century Communications and Video Accessibility Act (CVAA) of 2010, which picked up where Section 255 left off, defining accessibility regulations for broadband-enabled advanced communications services.

Web accessibility legislation, while not telco-specific, is also relevant. The Americans with Disabilities Act (ADA) doesn’t explicitly define commercial websites as “places of public accommodation” (because the ADA predates the Internet), but the courts have increasingly interpreted the law this way. Therefore, as “places of public accommodation,” company websites – and all associated content – must be accessible to people with disabilities. For more insight on this, try searching on “Netflix ADA Title III” or reading this article. (By the way, a web-focused update of the ADA is in the offing.) Last but not least, we come to Section 508 of the Rehabilitation Act, which spells out accessibility guidelines for businesses wanting to sell electronic and information technology (EIT) to the federal government. If your company doesn’t do that, then Section 508 doesn’t apply to you.

What this means for businesses

Not unreasonably, telecommunications companies must ensure that their products and services comply with accessibility regulations and are also usable by people with disabilities. This usability requirement means that telecom service providers must offer contracts, bills, and customer support communications in accessible formats. For product manufacturers, usability means providing customers with a full range of relevant learning resources in accessible formats: installation guides, user manuals, and product support communications. To comply with the legislation, telecommunications companies must find and implement cost-effective technology solutions that will allow them to deliver accessible customer-facing content. Organizations that fail to meet federal accessibility standards could leave themselves open to consumer complaints, lawsuits, and, possibly, stiff FCC fines.

Meeting the document challenge with accessible PDF

Telecommunications companies looking for ways to comply with federal regulations should consider a solution that can transform their existing document output of contracts, bills, manuals, and customer support communications into accessible PDF format. Why PDF?
PDF is already the de facto electronic document standard for high-volume customer communications such as service contracts and monthly bills because it’s portable and provides an unchanging snapshot, a necessity for any kind of recordkeeping. But what about HTML? Why not use that? While HTML is ideal for delivering dynamic web and mobile content such as on-demand, customizable summaries of customer account data, it doesn’t produce discrete, time-locked documents. Nor does HTML support archiving or portability, meaning HTML files are not “official” documents that can be stored and distributed as fixed entities.

Document content is low-hanging fruit

Document inaccessibility is not a problem that organizations need to live with, because it can be solved immediately — and economically — with OpenText’s Automated Output Accessibility Solution, the only enterprise PDF accessibility solution on the market for high-volume, template-driven documents. This unique software solution enables telecommunications companies to quickly transform service contracts, monthly bills, product guides, and other electronic documents into WCAG 2.0 Level AA-compliant accessible PDFs. Whatever the data source, our performance is measured in milliseconds, so customers receive their content right when they ask for it. OpenText has successfully deployed this solution at government agencies as well as large commercial organizations, giving it the experience and expertise required to deliver accessible documents within a short time frame, with minimal disruption of day-to-day business. Fast, reliable, compliant, and affordable, our automated solution can help you serve customers and meet your compliance obligations. Learn more about the OpenText™ Automated Output Accessibility solution.

Read More

Banking Technology Trends: Overcoming Process Hurdles

Financial analytics

Editor's Note: This is the second half of a wide-ranging interview with OpenText Senior Industry Strategist Gerry Gibney on the state of the global financial services industry and its technical needs. The interview has been edited for length and continuity.

Unifying Information for Financial Reporting

I heard a lot of discussion at the SIBOS 2016 conference in Geneva around financial reporting. Banks face procedural hurdles, especially if they're doing merchant or commercial banking. A lot of them still have manual processes. In terms of the procedures, the bigger the bank, the bigger the problem, because the information is often in many places. For example, different groups from the bank may approach their corporate banking customers to buy or use a service or product – which is great, but they have to track and report on it. In the beginning, these separate group reporting processes are often manual. Eventually, the banks want to automate the reporting and join the information to other data sources, but that's the big challenge: it takes time to assemble and coordinate all the information streams and get them to work as an internal dashboard. A similar challenge is creating portals to track financial liquidity.

Another example is where clients ask for specific reports. The bank doesn't want to say no, so it has to produce the reports manually, often as a rush job, and in a format that the client finds useful. The challenge is to take large amounts of data and summarize it so you can give people what they ask for with the periodicity, the look, and the format that they want.

Embedded Visualizations for our Customers' Customers

That's where we come in. A lot of the value we offer with OpenText Analytics is embedding our analytic and visualization applications in a client's own application so that they can offer dashboards, windows, reporting, and so forth to their own internal or external customers. The beauty of our embeddable business intelligence or analytics piece is that no one on the business side has to see it or work with it. It offers functionality that can be applied as needed, without IT adjustments on your part and without requiring people to enter data into bulky third-party programs. Tremendous capabilities are suddenly just there. Users can build a data map that automatically gathers and manages data, then organizes and reports it – in any format required, whether visual via charts and graphs, or numeric, if you prefer. Plus, it has powerful drill-down ability.

Flexibility to Cope with Regulatory Shifts

The other aspect of reporting is reporting to regulatory agencies. Since the Great Recession and the banking crisis, governments worldwide have been stepping up their regulation of the financial industry – and not just national governments; local governments too. In fact, the fastest-growing department in every bank now is regulatory compliance: ever-increasing workloads and more workflow, but without more people to deal with it. The problem in the U.S. is that regulation presents a moving target. Dodd-Frank controls and the Volcker Rule required banks to end proprietary trading. Changing government requirements create a new level of risk, forcing banks to produce new reports, sometimes on things they weren't even aware they needed to report on. Banks and other financial institutions need a reporting solution that enables quick and easy production of whatever information government regulators are asking for.
An ideal reporting solution maximizes your flexibility to look across both unstructured data and all the structured data sitting in multiple silos. This is a good use case for ad hoc analytics and reporting – the power to create new types of reports, whatever regulators may require.

Financial Analytics: Understanding Your Customer

Another analytics-related topic I heard at SIBOS was the need to understand customers better and to identify good target customers. This is top-of-mind for banks. I'm amazed that people gather streams of data in their CRM systems and then don't use it. Often their CRM systems are stand-alone, not connected to anything, yet they might contain information that's extremely valuable and could enhance their efforts – sales efforts, proposals, and pitch books, for example. Banks could tie these things together, then analyze the findings to correlate sales resources to results. With a unified analytics flow, you can drive business by managing client relationships, figuring out through advanced analytics who is the best candidate for up-selling or cross-selling, and identifying new customers. Finding new insights across all these CRM systems is a tremendous value that analytics, especially embeddable analytics from OpenText, can deliver. (A toy illustration of this idea follows at the end of this post.)

Analytics can bring a tremendous amount of value to business operations and make them more efficient, productive, and profitable. You can't ask more than that. To learn more about how OpenText Analytics can help the financial services industry unlock the business value of existing data, consider our Webinar, "Extracting Value from Your Data with Embedded Analytics," Wednesday, Dec. 14, at 11 a.m. Pacific Time/2 p.m. Eastern. Click here for more information and to register.
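As promised above, here is a toy illustration of tying stand-alone CRM data to sales results. The file and column names are hypothetical, and pandas merely stands in for the kind of embedded analytics engine described in the interview; this is a sketch of the idea, not a product recipe.

# Illustrative sketch: join exported CRM data to sales results and rank
# cross-sell candidates. File and column names are hypothetical.
import pandas as pd

crm = pd.read_csv("crm_contacts.csv")     # client_id, segment, pitches_sent
sales = pd.read_csv("sales_results.csv")  # client_id, product, revenue

joined = crm.merge(sales, on="client_id", how="left")
summary = (
    joined.groupby(["client_id", "segment"], as_index=False)
          .agg(products_held=("product", "nunique"),
               total_revenue=("revenue", "sum"),
               pitches_sent=("pitches_sent", "first"))
)

# Crude cross-sell signal: clients generating high revenue but holding
# few products are plausible candidates for additional offerings.
summary["cross_sell_score"] = summary["total_revenue"] / (summary["products_held"] + 1)
print(summary.sort_values("cross_sell_score", ascending=False).head(10))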

Read More

Banking Trends from SIBOS: Technology Solutions to Tame Rampaging Workflows

banking trends from SIBOS

Editor's Note: Gerry Gibney, Senior Industry Strategist at OpenText and resident expert on the financial services industry, was recently interviewed on the banking trends and technical needs he discovered at SIBOS (the annual trade show hosted by SWIFT, provider of global secure financial messaging services).

I always come back from SIBOS having learned new things – it's one of the largest banking events in the world – and this year one of the big topics was domestic payments. Many people aren't aware that for large banks, corporate internet banking payments represent around 24% of their revenue. Banks benefit from holding payment money while it is in their hands, and they can charge fees for the payment services. It's a big market because payments have to be made, whether regular payments such as rent and utilities on buildings or one-time money transfers. And they add up: for bigger banks, we're talking several hundred million dollars each. Of course, they would prefer to keep that balance in their bank or extract it over time.

I see a big role for OpenText here. Our BPM solution can be deployed to help with business networks, so banks can manage the workflow, the processes, and the controls. Managing the controls is important because with the SWIFT processes (payments and messaging), the issues include: Who is authorized to send the money? Who else can do it? Who else can approve it? What if that person leaves? How do we add them into the system or remove them?

Automating Banking Workflow

Our own experience at OpenText is typical. Every year, our company goes through the payment permissions updating process. What do we need to know? What do we need to get? How do we get it? Where do we apply it? How many accounts are affected? Doing business in, say, Hong Kong, Shanghai, or Japan, we may have 10 or 20 people with different signatory levels, each needing to sign an eight-page statement. Eight pages times 10 people, every year, for every account – that's 80 pages per account per year, and that's typical of many companies. A company might well have several hundred accounts with just one bank, and all of this has to be managed every year under ever-changing rules – regulators now requiring the CFO's home address, for example.

Another workflow example is client onboarding, which has to be done every time. Even if the customer has 200 accounts and wants to add number 201, you still have to go through the onboarding process. So all this information is out there in different places – who knows how well protected it all is? OpenText's security capabilities, and our ability to add, control, minimize, and automate workflow, add a lot of value. OpenText is also a SWIFT service bureau: we help with payments reporting, via EDI and our Business Network, to enhance what banks do.

We help banking in many areas, across all our solutions – for example, with analytics, on the content side for unstructured data, or with records management, which is strong on compliance. With embeddable analytics we can gather all sorts of information, whether for bank employees internally or for their clients and customers. This information can be transformed into reports, subjected to sophisticated analysis, and used to find new sources of revenue. It can also help banks track things more efficiently, comply with government regulations more easily, and improve the bottom line without increasing operating costs. In summary, it can be a tremendously powerful component of a bank's overall offering.
The second half of this interview will be published next week.

Read More

Data Quality is the Key to Business Success

data quality

In the age of digital transformation, all successful companies collect data, but one of the most expensive and difficult problems to solve is the quality of that information. Data analysis is useless without reliable information, because the answers we derive from it could deviate greatly from reality and lead us to bad decisions. Most organizations believe the data they work with is reasonably good, but they recognize that poor-quality data poses a substantial risk to their bottom line (The State of Enterprise Quality Data 2016 – 451 Research). Meanwhile, the idiosyncrasies of Big Data are only making the data quality problem more acute: information is being generated at ever faster rates, and larger data volumes are innately harder to manage.

Data quality challenges

There are four main drivers of dirty data:

Lack of knowledge. You may not know what certain data mean. For example, does the entry "2017" refer to a year, a price ($2,017.00), the number of widgets sold (2,017), or an arbitrary employee ID number? This can happen because the structure is too complex, especially in large transactional database systems, or because the data source is unclear (particularly if that source is external).

Variety of data. This is a problem when you're trying to integrate incompatible types of information. The incompatibility can be as simple as one data source reporting weights in pounds and another in kilograms, or as complex as different database formats.

Data transfers. Employee typing errors can be reduced through proofreading and better training. But a business model that relies on external customers or partners to enter their own data runs a greater risk of "dirty" data, because it can't control the quality of their inputs.

System errors caused by server outages, malfunctions, duplicates, and so forth.

Dealing with dirty data

Correcting a data quality problem is not easy. For one thing, it is complicated and expensive, and the benefits aren't apparent in the short term, so it can be hard to justify to management. And as mentioned above, the data gathering and interpretation process has many vulnerable places where error can creep in. Furthermore, both the business processes from which you're gathering data and the technology you're using are liable to change at short notice, so quality correction processes need to be flexible. An organization that wants reliable data quality therefore needs to build in multiple quality checkpoints: during collection, delivery, storage, integration, recovery, and analysis or data mining.

The trick is having a plan

Monitoring so many potential checkpoints, each requiring a different approach, calls for a thorough quality assurance plan. A classic starting point is analyzing data quality when it first enters the system – often via manual input, or where the organization may not have standardized data input systems. The risk is that data entry can be erroneous, duplicated, or overly abbreviated (e.g., "NY" instead of "New York City"). In these cases, data quality experts' guidance falls into two categories. First, you can act preventively on the process architecture: building integrity checkpoints, enforcing existing checkpoints better, limiting the range of data that can be entered (for example, replacing free-form entries with drop-down menus), rewarding successful data entry, and eliminating hardware or software limitations (for example, a CRM system that can't pull data straight from a sales revenue database). A minimal sketch of such entry-time checks follows.
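Here is the promised sketch of preventive, entry-time validation, written in Python. The field names, allowed values, and conversion rules are all hypothetical; the point is the pattern of constraining, normalizing, and rejecting data at the door rather than cleaning it later.

# Illustrative entry-time checkpoint: constrain values the way a
# drop-down menu would, normalize incompatible units, and reject
# ambiguous fields. Field names and rules are hypothetical.
ALLOWED_CITIES = {"New York City", "Amsterdam", "Toronto"}  # drop-down, not free text
LB_TO_KG = 0.45359237

def validate_record(rec):
    errors = []
    if rec.get("city") not in ALLOWED_CITIES:
        errors.append(f"invalid city: {rec.get('city')!r}")
    # Normalize weights to one unit so sources reporting pounds and
    # kilograms can be integrated safely.
    if rec.get("weight_unit") == "lb":
        rec["weight"] = rec["weight"] * LB_TO_KG
        rec["weight_unit"] = "kg"
    elif rec.get("weight_unit") != "kg":
        errors.append("unknown weight unit")
    # Refuse implausible values instead of guessing what "2017" means.
    if "year" in rec and not (1900 <= rec["year"] <= 2100):
        errors.append(f"implausible year: {rec['year']}")
    return errors

print(validate_record({"city": "NY", "weight": 10, "weight_unit": "lb", "year": 2017}))
# -> ["invalid city: 'NY'"]  (the weight is normalized to kg in passing)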
The other option is to act retrospectively, focusing on data cleaning and diagnostic tasks (error detection). Experts recommend these steps (a minimal sketch of such a diagnostic pass follows at the end of this post):

Analyzing the accuracy of the data, either by making a full inventory of the current situation (trustworthy but potentially expensive) or by examining work and audit samples (less expensive, but not 100% reliable).

Measuring the consistency and correspondence between data elements; problems here can affect the overall truth of your business information.

Quantifying systems errors in analysis that could damage data quality.

Measuring the success of completed processes, from data collection through transformation to consumption. One example is how many "invalid" or "incomplete" alerts remain at the end of a pass through the data.

Your secret weapon: "data provocateurs"

None of this will help if you don't have the whole organization involved in improving data quality. Thomas C. Redman, an authority in the field, presents a model for this in a Harvard Business Review article, "Data Quality Should Be Everyone's Job." Redman says it's necessary to involve what he calls "data provocateurs": people in different areas of the business, from top executives to new employees, who will challenge data quality and think outside the box for ways to improve it. Some companies are even offering awards to employees who detect process flaws where poor data quality can sneak in. This not only cuts down on errors; it has the added benefit of promoting the idea throughout the company that clean, accurate data is important.

Summing up

Organizations are rightly concerned about data quality and its impact on their bottom line. Those that take measures to improve their data quality are seeing higher profits and more efficient operations because their decisions are based on reliable data. They also see lower costs from fixing errors and spend less time gathering and processing their data. The journey toward better data quality requires involving all levels of the company. It also requires assuming costs whose benefits may not be visible in the short term but which will eventually boost these companies' profits and competitiveness.
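And here is the promised sketch of a retrospective, diagnostic pass, again in Python with pandas. The file and column names are hypothetical; it simply measures completeness and consistency and counts the alerts that survive a run, echoing the steps listed above.

# Illustrative diagnostic pass over an exported table: measure
# completeness and consistency after the fact, and count the
# "invalid"/"incomplete" alerts that remain. Names are hypothetical.
import pandas as pd

df = pd.read_csv("customer_master.csv")

# Completeness: rows with any required field missing.
incomplete = int(df[["name", "email", "country"]].isna().any(axis=1).sum())

# Consistency: elements that should agree with each other
# (here, US rows whose postal code is not a 5-digit ZIP).
inconsistent = int(
    ((df["country"] == "US") & ~df["zip"].astype(str).str.match(r"^\d{5}$")).sum()
)

# Accuracy: audit a random sample rather than a full (expensive) inventory.
audit_sample = df.sample(frac=0.01, random_state=1)

print(f"{incomplete} incomplete rows, {inconsistent} inconsistent rows, "
      f"{len(audit_sample)} rows queued for manual audit")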

Read More

Attention All Airlines: Is Your Inaccessible Document Technology Turning Away Customers?

accessible PDF

Imagine you're an airline executive and a small but significant percentage of your customers – let's say 10% or less – download flight itineraries and boarding passes from your website only to find that the information in these documents is jumbled up and, in some cases, missing altogether. What would you do? Would you be concerned enough to take action? Would it matter if these customers didn't know their flight number, boarding gate, and seat assignment? After all, 90% or more of your customers would still be receiving this information as usual.

Before venturing an answer to these hypothetical questions, let's pause for a quick look at your industry. Over the last 60 years, airline profit margins have averaged less than 1%, though the situation has been improving in recent years. The International Air Transport Association (IATA) reported a net profit margin of 1.8% in 2014 and 4.9% in 2015, and industry profits are expected to reach 5.6% in 2016. With such narrow margins, it's clear that airlines need every customer they can get, and the industry has little tolerance for inefficiencies.

Now back to your document problem. Even if fewer than 10% of customers were affected, it seems likely that you'd take steps to fix the problem, and pull out all the stops to get it done as fast as possible, before the company loses many customers. Of course, the underlying assumption here is that a proven, economically feasible IT solution is available.

This might be happening at your airline – for real

All hypotheticals aside, a scenario like this could actually be playing out at your company right now. Consider: according to the 2014 National Health Interview Survey, 22.5 million American adults – nearly 10% of the adult population – reported being blind or having some other visual impairment. To access online flight booking tools, along with electronic documents such as itineraries and boarding passes, many of these people need to use screen reader programs that convert text into audio. If, however, the documents aren't saved in a format like accessible PDF (with a heading structure, defined reading order, etc.), they're likely to come out garbled or incomplete in a screen reader.

Of course, visually impaired customers could book their flights by phone and opt to receive Braille or large print documents in the mail (expensive for your airline). Then again, theoretically, all of your other customers could book by phone, too. The point is you don't really want customers booking by phone, because your self-serve website is less costly to operate than customer call centers; electronic documents are cheaper than paper and postage, and much cheaper than Braille and large print. So wouldn't it be nice if there were an affordable technology solution you could plug in to serve up the documents that all of your customers – that's the 90% plus the 10% – need to fly with your airline? It would be even better if the solution met the requirements of the new Department of Transportation (DOT) rules implementing the Air Carrier Access Act (ACAA), which have a compliance deadline of December 12, 2016. Customer satisfaction and regulatory compliance? Now that would be good.

OpenText Automated Output Accessibility Solution

OpenText has the only enterprise PDF accessibility solution for high-volume, template-driven documents. This unique software solution can dynamically transform electronic documents such as e-ticket itineraries/receipts and boarding passes into accessible PDFs that comply with the DOT's new ACAA rules.
Designed to be inserted between the back office (e.g., a passenger reservation system) and a customer-facing web portal, the OpenText™ Automated Output Accessibility Solution has minimal impact on existing IT infrastructure. Even better, the solution generates WCAG 2.0 Level AA compliant output that has been tested and validated by prominent organizations and advocacy groups for visually impaired people. OpenText has successfully deployed this solution at government agencies as well as large commercial organizations, gaining the experience and expertise required to deliver accessible documents within a short time frame and with minimal disruption to day-to-day business. As the de facto electronic document standard for high-volume customer communications, PDF offers both portability and an unchanging snapshot of information – necessities for a document of record. Contact us to discuss how we can help you deliver accessible, ACAA-compliant PDF documents to your customers. Remember, the DOT's deadline is December 12, 2016.

Read More