Analytics

Find More Knowledge in Your Information at Enterprise World 2017

If your office is like most, it’s got millions of gigabytes of information stashed away on computer hard drives – and maybe even file cabinets full of paper! Every single business process generates enormous data streams – not just your ERP and CRM systems, but payroll, hiring, even ordering lunch from the caterer for those regular Thursday meetings. So wouldn’t you like to find out how to leverage the knowledge already contained in all that information? And derive more value from your existing systems of record? Come to OpenText Enterprise World this July and you’ll hear how organizations in every industry are using the cutting-edge techniques of OpenText™ Analytics to derive more value from their data – including self-service access, prediction and modeling, and innovative techniques for getting insights more easily out of unstructured data (aka the stuff you use most of the time: documents, messages, and social media). We are excited to showcase OpenText Magellan at this year’s conference and show you the impact it will have in helping analyze massive pools of data and harness the power of your information. We’ll also preview the roadmap of new developments in the OpenText Analytics Suite.

Helping Our Human Brains Navigate Big Data

Thanks to cheap and abundant technology, we have so much data at our disposal – creating up to 2.5 exabytes a day by some estimates – that the sheer amount is overwhelming. In fact, it’s more than our human brains can make sense of. “It’s difficult to make decisions, because that much data is more than we can make sense of, cognitively,” says Lalith Subramanian, VP of Engineering for Analytics at OpenText. “That’s where machine learning and smart analytics come into the picture,” he explains. “We intend to do for Big Data what earlier reporting software companies tried to do for business intelligence – simplify it and make it less daunting, so that reasonably competent people can do powerful things with Big Data.” Expect plenty of demos and use cases, including a look at our predictions from last year’s Enterprise World about who would die on Season 6 of “Game of Thrones,” and new prognostications for Season 7.

Do-It-Yourself Analytics Provisioning

Meanwhile, OpenText also plans to unveil enhancements to the Analytics Suite that will give users even more power to blend and explore their own data. OpenText™ iHub, our enterprise-grade deployment server for interactive analytics at the core of the Analytics Suite, is adding the ability to let non-technical users provision their own data for analysis, rather than relying on IT, Subramanian says. They can freely blend and visualize data from multiple sources. These sources will soon include not just structured data, such as spreadsheets, prepared database files, and ERP records, but unstructured data including text documents, web content, and social media streams. That’s because new algorithms to digest and make sense of language and text are being infused into both OpenText Analytics and OpenText™ InfoFusion, an important component in the content analytics process. With OpenText™ Big Data Analytics, users will be able to apply these new, customized algorithms to self-provisioned data of many types. At the same time, InfoFusion is adding adapters to pull content from Twitter feeds and web sites automatically.

The Word on the Street

One use case for this combination of OpenText InfoFusion and the Analytics Suite is to research topics live, as they’re being discussed online, Subramanian adds.
“You could set it up so that it goes out as often as desired to see the latest things related to whatever person or topic you’re interested in. Let’s say OpenText Corporation – then it’ll go look for news coverage about OpenText plus the press releases we publish, plus Tweets by and about us, all aggregated together, then analyzed by source, sub-topic, emotional tone (positive, negative, or neutral), as we’ve demonstrated with our content analytics-based Election Tracker. Over time we’d add more and more (external information) sources.” Keep in mind, politicians, pundits, and merchants have been listening to “the word on the street” for generations. But that used to require armies of interns to go through all the mail, voice messages, conversations, or Letters to the Editor – and the net result was score-keeping (“yea” vs. “nay” opinions) or subjective impressions. Now these opinions, like every other aspect of the digital economy, can be recorded and analyzed by software that’s objective and tireless. And they can add up to insights that enrich your business intelligence for better decision-making. To see and hear all of this in person, don’t miss Enterprise World in Toronto, July 10-13. Click here for more information and to register.
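To make the pipeline Subramanian describes a little more concrete, here is a minimal sketch, in plain Python, of aggregating items from several sources and bucketing them by source and tone. This is an illustration only, not OpenText code: the fetch_items helper and the tiny keyword lexicon are hypothetical placeholders, whereas a production pipeline would rely on InfoFusion adapters and trained language models.

```python
from collections import Counter

# Hypothetical stand-in for source adapters: each item has a "source" and "text".
def fetch_items(topic):
    return [
        {"source": "news",  "text": f"{topic} posts strong quarterly results"},
        {"source": "press", "text": f"{topic} announces new analytics release"},
        {"source": "tweet", "text": f"support wait times at {topic} are terrible"},
    ]

# Toy lexicon; real content analytics would use trained models, not keyword lists.
POSITIVE = {"strong", "new", "great", "win"}
NEGATIVE = {"terrible", "outage", "loss", "delay"}

def tone(text):
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def track(topic):
    """Tally items by (source, tone) for one topic of interest."""
    tally = Counter()
    for item in fetch_items(topic):
        tally[(item["source"], tone(item["text"]))] += 1
    return tally

if __name__ == "__main__":
    for (source, sentiment), count in sorted(track("OpenText").items()):
        print(f"{source:>5} | {sentiment:<8} | {count}")
```

Polling such a tally on a schedule and plotting the counts over time is, in spirit, the "word on the street" dashboard described above.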

Read More

What a Difference a Day Makes: Get up to Speed on OpenText Analytics in 7 Hours

Analytics Workshop

One of the biggest divides in the work world these days is between people with software skills and “business users” – the ones who can work their magic on data and make it tell stories, and… well, everyone else (those folks who often have to go hat in hand to IT, or their department’s digital guru, and ask them to crunch the numbers or build them a report). But that divide is eroding with help from OpenText™ Analytics. With just a few hours’ training, you can go from absolute beginner to creating sophisticated data visualizations and interactive reports that reveal new insights in your data. And if you’re within travel distance of Washington, D.C., have we got an offer for you! Join OpenText Analytics on Wednesday, May 10, at The Ritz-Carlton, Arlington, VA, for a free one-day interactive, hands-on analytics workshop that dives deep into our enterprise-class tools for designing, deploying, and displaying visually appealing information applications. During this workshop, you’ll gain insights from our technical experts Dan Melcher and Geff Vitale. You’ll learn how OpenText Analytics can provide valuable insights into customers, processes, and operations, improving how you engage and do business. We recently added a bonus session in the afternoon on embedding secure analytics into your own applications. Here, you’ll see why many companies use OpenText™ iHub to deliver embedded analytics, either to customers (e.g. through a bank’s portal) or as an OEM app vendor, embedding our enterprise-grade analytics on a white-label basis to speed up the development process. Here’s what to expect in each segment:

Learning the Basics of OpenText Analytics Suite – Get introduced to the functions and use cases of OpenText Analytics Suite, including basic data visualizations and embedded analytics. Start creating your own interactive reports and consider what this ability could do for your own business.

Analyze the Customer – You’ll learn about the advanced and predictive analysis features of the Analytics Suite by doing a walk-through of a customer analysis scenario. Begin segmenting customer demographics, discovering cross-sell opportunities, and predicting customer behavior, all in minutes – no expertise needed in data science or statistics.

Drive Engagement with Dashboards – A self-service scenario where you create and share dashboards completely from scratch will introduce the dashboarding and reporting features of OpenText Analytics. See how easy it is to assemble interactive data visualizations that allow users to filter, pivot, explore, and display the information any way they wish.

Embed Secure Analytics with iHub – After the lunch break, learn how to enable secure analytics in your application, whether as a SaaS or on-premises deployment. OpenText answers the challenge with uncompromising extensibility, scalability, and reliability.

Who should attend?

- IT directors and managers, business analysts, product managers, and architects
- Team members who define, design, and deploy applications that use data visualizations, reports, dashboards, and analytics to engage their audience
- Consultants who help clients evaluate and implement the right technology to deliver data visualizations, reports, dashboards, and analytics at scale

If you are modernizing your business with Big Data and want your entire organization to benefit from compelling data visualizations, interactive reports, and dashboards – then don’t miss this free, hands-on workshop! For more details or to sign up, click here.
And if you’d really like to dive into the many facets of OpenText Analytics, along with Magellan, our next-generation cognitive platform, and the wide world of Enterprise Information Management, don’t miss Enterprise World, July 10-13 in Toronto.  For more information, click here.

Read More

Enterprise World: Analytics Workshop Takes You From Zero to Power User in 3 Hours

Analytics Workshop

One of the great things about OpenText™ Analytics Suite is its ease of use. In less than three hours, you can go from being an absolute beginner to creating dynamic, interactive, visually appealing reports and dashboards. That’s even enough time to become a “citizen data scientist,” using the advanced functionalities of the Analytics Suite to perform sophisticated market segmentation and predict likely outcomes and customer behavior. So by popular demand, we’re bringing back our Hands-On Analytics Workshop at Enterprise World 2017, July 10-13 in Toronto. The workshop comprises three 50-minute sessions on Tuesday afternoon, July 11. Just bring your laptop, connect to our server, and get started with a personalized learning experience. You can attend the sessions individually – but for the full experience, you’ll want to attend all three. Learn how businesses and nonprofits use OpenText Analytics to better engage customers, improve processes, and modernize their operations by providing self-service analytics to a wide range of users across a variety of use cases. This three-part workshop is also valuable for users of OpenText™ Process Suite, Experience Suite, Content Suite, and Business Network. Here’s what to expect in each segment:

1. ANA-200: Learning the Basics of OpenText Analytics Suite – This demo-packed session serves as an introduction to the series and will arm you with all you need to know about the OpenText Analytics Suite, including use cases, benefits, and customer successes, as well as a deep dive into product features and functionality. Through a series of sample application demonstrations, you will learn how OpenText Analytics can meet any analysis requirement or use case, including yours! This session is a perfect lead-in for the next two sessions, ANA-201 and ANA-202.

2. ANA-201 Hands-On Workshop: Using Customer Analytics to Improve Engagement – This hands-on session will introduce the advanced and predictive analysis features of the Analytics Suite by walking you through a customer analysis scenario using the live product. Connect from your own laptop to our server and begin segmenting customer demographics, discovering cross-sell opportunities, and predicting customer behavior, all in minutes – no expertise needed in data science or statistics. You will learn how OpenText Analytics can provide valuable insights into customers, processes, and operations, improving how you engage and do business.

3. ANA-202 Hands-On Workshop: Working with Dashboards to Empower Your Business Users – This hands-on session will introduce the dashboarding and reporting features of OpenText Analytics by walking you through a self-service scenario where you create and share dashboards completely from scratch. Connect from your laptop to our server and see just how easy it is to assemble interactive data visualizations that allow users to filter and pivot the information any way they wish, in just a matter of minutes! You will learn how OpenText makes it easy for any user to analyze and share information, regardless of their technical skill.

Of course, we have plenty of other interesting sessions about OpenText Analytics planned for Enterprise World. Get a sneak peek at product road maps, exciting new features (including developments in Magellan, our cognitive software platform), and innovative customer use cases for the OpenText Analytics Suite. Plus, get tips from experts, immerse yourself in technical details, network with peers, and enjoy great entertainment.
Click here for more details about attending Enterprise World. See you in Toronto!

Read More

Knorr-Bremse Keeps the Wheels Rolling with Predictive Maintenance Powered by OpenText Analytics

diagnosis

Trains carry billions of passengers and tons of freight a year worldwide, so making sure their brakes work properly is no mere routine maintenance check. Helping rail transport operate more safely and efficiently is top-of-mind for the Knorr-Bremse Group, based in Munich, Germany. The company is a leading manufacturer of brakes and other components for trains, metro cars, and buses. These components include sophisticated programming to optimize operations and diagnosis. The company developed iCOM, an Internet of Things-based platform for automated maintenance and diagnosis. Through onboard sensors, iCOM (Intelligent Condition Oriented Maintenance) gathers data wirelessly from more than 30 systems throughout a train car, including brakes, doors, wipers, heating, and ventilation. These IoT sensors continually report back conditions such as temperature, pressure, energy generation, duration of use, and error conditions. iCOM analyzes the data to recommend condition-based, rather than static, scheduled maintenance. This means any performance issue can be identified before it becomes a serious safety problem or a more costly repair or replacement. For iCOM customers, this means better safety, more uptime, improved energy efficiency, and lower operating costs for their rail fleets. As more customers adopted the solution, they began demanding more sophisticated analysis (to see when, where, and even why an event happens), more visually engaging displays, and the ability to build their own reports without relying on IT. Knorr-Bremse knew it needed to upgrade the technology it was using for analysis and reporting on the vast quantities of data that the iCOM solution gathers, replacing open-source BIRT (Business Intelligence and Reporting Tools). A new analytics platform would also have to be scalable enough to cope with the enormous volumes of real-time data that thousands of sensors across a rail fleet continually generate. Further, Knorr-Bremse needed an analytics solution it could develop, embed into the overall iCOM platform, and bring to market with the least possible time and coding effort. The answer to these challenges was the OpenText™ Analytics Suite. “Due to the easy-to-use interface of OpenText Analytics, our developers were quickly productive in developing the analytics and reporting aspects of iCOM. iCOM is based on Java and consequently it has been very easy to integrate and embed the OpenText Analytics platform [into it]. It is not just about shortening the time to develop, though. The results have to look good, and with OpenText, they do,” says Martin Steffens, the iCOM digital platform project manager and software architect at Knorr-Bremse. To learn more about Knorr-Bremse’s success with OpenText Analytics, including a potential drop of up to 20 percent in maintenance costs, click here.
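At its core, condition-based maintenance means comparing live sensor readings against expected operating ranges and flagging degradation early. The sketch below illustrates that pattern in Python; it is not Knorr-Bremse's or OpenText's code, and the subsystem names, thresholds, and readings are invented for the example.

```python
from statistics import mean

# Invented operating limits per subsystem (upper bounds in each unit).
LIMITS = {
    "brake_temp_c": 450.0,      # brake disc temperature
    "door_cycle_ms": 3500.0,    # door close time
    "hvac_pressure_bar": 8.0,   # HVAC compressor pressure
}

def maintenance_flags(readings, warn_ratio=0.9):
    """Return subsystems whose recent average approaches or exceeds its limit.

    readings: dict mapping subsystem name -> list of recent sensor samples.
    """
    flags = []
    for name, samples in readings.items():
        limit = LIMITS.get(name)
        if not limit or not samples:
            continue
        avg = mean(samples)
        if avg >= limit:
            flags.append((name, "service now"))
        elif avg >= warn_ratio * limit:
            flags.append((name, "schedule inspection"))
    return flags

# Example: one car's last few samples per subsystem.
car_17 = {
    "brake_temp_c": [410, 425, 438],       # trending toward the limit
    "door_cycle_ms": [2100, 2150, 2080],   # healthy
    "hvac_pressure_bar": [8.2, 8.4, 8.1],  # already over the limit
}
print(maintenance_flags(car_17))
# [('brake_temp_c', 'schedule inspection'), ('hvac_pressure_bar', 'service now')]
```

A real fleet platform would of course work from streaming data, trend models, and error codes rather than a fixed threshold table, but the flag-before-failure idea is the same.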

Read More

Discovery Rises at Enterprise World

This summer will mark a full year since Recommind became OpenText Discovery, and we’re preparing to ring in that anniversary at our biggest conference yet: Enterprise World 2017! We’re inviting all of our clients, partners, and industry peers to join us for three days of engaging roundtables, interactive product demos, Q&A with experts, a keynote from none other than Wayne Gretzky, and—of course—the latest updates, roadmaps, and visions from OpenText leaders. Here’s a sneak peek of what to expect from OpenText Discovery’s track: The Future of Enterprise Discovery. We’ll be talking at a strategic and product-roadmap level about unifying Enterprise Information Management (EIM) with eDiscovery. New data source connectors, earlier use of analytics, and even more flexible machine learning applications are on the way! Introduction to eDiscovery. Our vision for the future of eDiscovery is broader than the legal department, and we’re spreading that message with sessions tailored for IT and data security professionals that want to know more about the legal discovery process and data analysis techniques. Why Legal is Leading the Way on AI. Our machine learning technology was the first to receive judicial approval for legal document review, and in the years since, we’ve continued to innovate, develop, and expand machine learning techniques and workflows. In our sessions, we’ll highlight current and future use cases for AI for investigations, compliance, due diligence, and more. Contract Analysis and Search. We’ll also have sessions focused exclusively on innovations in enterprise search and financial contract analysis. Join experts to learn about the future of predictive research technology and the latest data models for derivative trading optimization and compliance. Our lineup of sessions is well underway and we’ve got an exciting roster of corporate, academic, government, and law firm experts including a special keynote speaker on the evolving prominence of technology in law. Register here for EW 2017  with promo code EW17TOR for 40% off and we’ll see you in Toronto!

Read More

From KPIs to Smart Slackbots, Hot New Analytics Developments at OpenText Enterprise World 2017

Innovation never sleeps in the OpenText Analytics group, where we’re working hard to put together great presentations for Enterprise World 2017, July 10-13 in Toronto. We offer a sneak peek at product road maps, exciting new features, and innovative customer use cases for the OpenText Analytics Suite. Plus, you can get hands-on experience building custom-tailored apps, get tips from experts, immerse yourself in technical details, and network with peers. Learn about:

- Reporting and dashboards with appealing, easy-to-create visual interfaces
- Self-service analytics to empower your internal users and customers and help you make better decisions
- Best-of-breed tools to crunch massive Big Data sets and derive insights you never could have before
- Cognitive computing and machine learning
- Capturing the Voice of the Customer
- Structured and unstructured content analytics that can unlock the hidden value in your documents, chats, and social media feeds

Our presentations include:

- Industry-focused sessions, including OpenText Analytics for Financial Services. Hear how we add value in common use cases within the financial industry, including customer analytics, online consumer banking, and corporate treasury services.
- Showcases of hot new functions, like Creating Intelligent Analytic Bots for Slack (the popular online collaboration tool).
- Personalized training in OpenText Analytics. Our three-part Hands-On Analytics Workshop can take you from absolute beginner to competent user, harnessing the power of Big Data for better insights and building compelling data visualizations, interactive reports, and dashboards.
- Technical deep dives with popular tools such as Business Performance Management Analytics. We’ll show you how to use OpenText Analytics to measure KPIs and performance-driven objectives, including the popular Balanced Scorecard methodology.
- A fascinating use case: Financial Contract Analysis with Perceptiv. See how customers are using our advanced analytics tool to capture, organize, and extract relevance from over 200 fields in half a million financial derivative contracts.
- How Many Lawyers Does It Take to Analyze an Email Server? Learn how lawyers and investigators are using our cutting-edge OpenText Discovery technology, including email mapping, concept-based search, and machine learning, to find the “smoking guns” in thousands of pages of email.

Click here for more details about attending Enterprise World. See you in Toronto!

Read More

For Usable Insights, You Need Both Information and the Right Analytical Engine

Data

“It’s all about the information!” Chances are you’ve heard this before. If you are a Ben Kingsley or Robert Redford fan you may recognize the line from Sneakers (released in 1992). Yes, 1992 – before the World Wide Web! (Remember, Netscape didn’t launch the first commercially successful Web browser until 1993.) Actually, it’s always been about the information, or at least the right information – what’s needed to make an informed decision, not just an intuitive one. In many ways the information, the data, has always been there; it’s just that until recently, it wasn’t readily accessible in a timely manner. Today we may not realize how much data is available to us through technology, like the mobile device in your pocket – at 12GB, an iPhone 6S holds 2,000 times more than the 6MB programs IBM developed to monitor the Apollo spacecrafts’ environmental data. (Which demonstrates the reality of Moore’s Law, but that’s another story.) Yet because it’s so easy to create and store large amounts of data today, far too often we’re drowning in data and experiencing information overload.

Drowning in Data

Chances are you’re reading this in between deleting that last email, before your next Tweet, because the conference call you are on has someone repeating the information you provided yesterday. Bernard Marr, a contributor to Forbes, notes “that more data has been created in the past two years than in the entire previous history of the human race.” Marr’s piece has at least 19 other eye-opening facts about how much data is becoming available to us, but the one that struck me the most is that, by his estimate, less than 0.5% of all data is ever analyzed and used. Imagine the opportunities missed. Just within the financial industry, the possibilities are limitless. For example, what if a customer’s transaction patterns indicated they were buying more and more auto parts and making more and more payments to their local garage or mechanic? Combined with a recent increase in automatic payroll deposits, might that indicate this customer would be a good prospect for a 0.9% new car financing offer? Or imagine the crises that could be avoided. Think back to February 2016 and the Bangladesh Bank heist, where thieves managed to arrange the transfer of $81 million to the Rizal Commercial Banking Corporation in the Philippines. While it’s reasonable to expect existing controls might have detected the theft, it turns out that a “printer error” alerted bank staff in time to forestall an even larger theft, up to $1 billion. The SWIFT interface at the bank was configured to print out a record each time a funds transfer was executed, but on the morning of February 5 the print tray was empty. It took until the next day to get the printer restarted. The New York Federal Reserve Bank had sent queries to the bank questioning the transfer. What alerted them? A typo. Funds to be sent to the Shalika Foundation were addressed to the “Shalika fandation.” The full implications of this are covered in WIRED Magazine.

Analytics: Spotting Problems Before They Become Problems

Consider the difference if the bank had had a toolset able to flag the anomaly of a misspelled beneficiary in time to generate alerts and hold up the transfers for additional verification. The system was programmed to generate alerts as print-outs. It’s only a small step to have alerts like this sent as an SMS text or email to the bank’s compliance team, which might have attracted notice sooner. Extracting the most value from the business data available to you requires two things: an engine and a network.
The engine should be like the one in OpenText™ Analytics, designed to perform the data-driven analysis needed. With the OpenText™ Analytics Suite, financial institutions can not only derive data-driven insights to offer value-added solutions to clients, they can also better manage the risk of fraudulent payment instructions, based on insights derived from a client’s payment behavior. For example, with the Bangladesh Bank, analytics might have flagged some of the fraudulent transfers to Rizal Bank in the Philippines by correlating the facts that the Rizal accounts had only been opened in May 2015, contained only $500 each, and had not previously been beneficiaries.

Business Network: Delivering Data to Analytical Engines

But the other, equally important tool is the network. As trains need tracks, an analytics engine needs data, as well as the network to deliver it. Today, more and more of the data needed to extract value comes from outside the enterprise. The OpenText™ Business Network is one way thousands of organizations exchange the data needed to manage their business, and it provides the fuel for their analytical engines. For example, suppose a bank wanted to offer its customers the ability to generate ad-hoc reporting through their banking portal. With payment, collection, and reporting data flows delivered through OpenText Business Network Managed Services, the underlying data would be available for the bank’s analytical engine. Obviously, much of the data involved in the examples I’ve provided would be sensitive, confidential, and in need of robust information security controls to keep it safe. That will be the subject of my next post.
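As a rough illustration of the rule-based screening described above, the sketch below flags outgoing transfers whose beneficiary accounts are newly opened, hold unusually small balances, have never been paid before, or carry a name mismatch. It is a simplified, hypothetical example rather than OpenText code: the thresholds, field names, and the sample transfer are invented, and a real system would combine such rules with behavioral models of each client's payment history.

```python
from datetime import date

def screen_transfer(transfer, today=date(2016, 2, 5)):
    """Return a list of risk flags for a single outgoing payment instruction.

    transfer: dict with hypothetical fields describing the beneficiary account.
    """
    flags = []
    account_age_days = (today - transfer["beneficiary_opened"]).days
    if account_age_days < 365:
        flags.append("beneficiary account opened recently")
    if transfer["beneficiary_balance"] < 1_000:
        flags.append("beneficiary balance far below transfer amount")
    if not transfer["previously_paid"]:
        flags.append("beneficiary never paid before")
    if transfer["beneficiary_name"].lower() != transfer["registered_name"].lower():
        flags.append("beneficiary name mismatch")  # e.g. 'fandation' vs. 'Foundation'
    return flags

# Invented example loosely modeled on the case discussed above.
suspect = {
    "amount": 20_000_000,
    "beneficiary_opened": date(2015, 5, 15),
    "beneficiary_balance": 500,
    "previously_paid": False,
    "beneficiary_name": "Shalika Fandation",
    "registered_name": "Shalika Foundation",
}
print(screen_transfer(suspect))
```

Any transfer returning one or more flags would be held for additional verification and routed to the compliance team instead of (or in addition to) a printer tray.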

Read More

Steel Mill Gains Insight, Makes Better Decisions Through Analytics

analytics

When you think of a steel mill, crucibles of glowing molten metal, giant molds and rollers probably come to mind, not complex financial analysis. But like every other industry nowadays, steel mills – especially ones that specialize in scrap metal recycling – have to keep reviewing their material and production costs and the ever-changing demand for their products, so that they can perform efficiently in a competitive global market. That was the case for North Star BlueScope Steel in Delta, Ohio, which produces hot-rolled steel coils, mostly for the automotive and construction industries. Founded in 1997, the company is the largest scrap steel recycler in Ohio, processing nearly 1.5 million tons of metal a year. To operate profitably, North Star BlueScope examines and analyzes its costs and workflow every month, pulling in data from all over the company, plus external market research. But it was hampered by slow and inefficient technology, centered on Microsoft Excel spreadsheets so large and unwieldy, they took up to 10 minutes just to open. Comparing costs for, say, the period of January through May required North Star staffers to open five separate spreadsheets (one for each month) and combine the information manually. Luckily, the company was already using OpenText™ iHub  as a business intelligence platform for its ERP and asset management systems. It quickly realized iHub would be a much more efficient solution for its monthly costing analysis than the Excel-based manual process. Making Insights Actionable In fact, North Star BlueScope Steel ended up adopting the entire OpenText™ Analytics Suite, including OpenText™ Big Data Analytics (BDA),  whose advanced approach to business intelligence lets it easily access, blend, explore, and analyze data. The results were impressive. The steel company can now analyze a much larger range of its data and get better insights to steer decision-making. For example, it can draw on up to five years’ worth of data in a single, big-picture report, or drill down to a cost-per-minute understanding of mill operations. Now it has a better idea of the grades and mixes of steel products most likely to generate higher profits, and the customers most likely to buy those products. To learn more about how North Star BlueScope Steel is using OpenText Analytics to optimize its operations, plus its plans to embrace the Internet of Things by plugging data streams from its instruments about electricity consumption, material usage, steel prices, and even weather directly into Big Data Analytics, click here.
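To give a feel for what the old manual process amounted to, here is a small, hypothetical pandas sketch that consolidates several monthly cost extracts into one summary, the kind of repetitive merge-and-total work previously done by hand across five Excel files. The file names and columns are invented; this is an illustration of the consolidation task, not North Star BlueScope's or OpenText's actual implementation.

```python
import glob
import pandas as pd

# Hypothetical monthly extracts, e.g. costs_2017_01.csv ... costs_2017_05.csv,
# each with invented columns: cost_center, material_cost, production_cost.
frames = [pd.read_csv(path) for path in sorted(glob.glob("costs_2017_0[1-5].csv"))]
costs = pd.concat(frames, ignore_index=True)

# One consolidated January-through-May view instead of five hand-merged spreadsheets.
summary = (
    costs.groupby("cost_center")[["material_cost", "production_cost"]]
         .sum()
         .assign(total=lambda df: df.sum(axis=1))
         .sort_values("total", ascending=False)
)
print(summary)
```

A platform like iHub goes further by serving interactive reports over live data sources, but the contrast with opening five ten-minute-loading spreadsheets is the point here.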

Read More

Unlock Unstructured Data and Maximize Success in Your Supply Chain

By any standard, a successful business is one that can find new customers, discover new markets, and pursue new revenue streams. But today, succeeding via digital channels, delivering an excellent customer experience, and embracing digital transformation is the true benchmark. Going digital can increase your agility, and with analytics you can get the level of insight you need to make better decisions. Advances in analytics and content management software are giving companies more power to cross-examine unstructured content, rather than leaving them to rely on intuition and gut instinct. Now, you can quickly identify patterns and gain a new level of visibility into business operations. Look inside your organization to find the value locked within the information you have today. The unstructured data being generated every day inside and outside your business holds targeted, specific intelligence that is unique to your organization and can be used to find the keys to current and future business drivers. Unstructured data like emails, voicemails, written documents, presentations, social media feeds, surveys, legal depositions, web pages, videos, and more offers a rich mine of information that can inform how you do business. Unstructured content, on its own or paired with structured data, can be put to work to refine your strategy. Predictive and prescriptive analytics offer unprecedented benefits in the digital world. Consider, for instance, the data collected from a bank’s web chat service. Customer service managers cannot read through millions of lines of free text, but ignoring this wealth of information is not an option either. Sophisticated data analytics allow banks to spot and understand trends, like common product complaints or frequently asked questions. They can see what customers are requesting to identify new product categories or business opportunities. Every exchange, every interaction, and all of your content holds opportunity that you can maximize. Making the most of relevant information is a core principle of modern enterprise information management. This includes analyzing unstructured information that is outside the organization, or passed between the company and trading partners across a supply chain or business network. As more companies use business networks, there is an increase in the types and amounts of information flowing across them: orders, invoices, delivery information, partner performance metrics, and more. Imagine the value of understanding the detail behind all that data, and the insight it could provide for future planning – even better if you could analyze it fast enough to make a difference in what you do today. Here are two common, yet challenging, scenarios and their solutions.

Solving challenges in your enterprise

Challenges within the business network – A business network was falling behind in serving its customers. The company needed to increase speed and efficiency within its supply chain to provide customers with deeper business process support and rich analytics across their entire trading partner ecosystem. With data analytics, the company learned more from its unstructured data – emails and documents – and was able to gain clearer insights into transactions flowing across the network. The new system allows it to identify issues and exceptions earlier, take corrective action, and avoid problems before they occur.
Loss of enterprise visibility – A retail organization was having difficulty supporting automatic machine-to-machine data feeds coming from a large number of connected devices within their business network. With the addition of data analytics across unstructured data sources, they gained extensive visibility into the information flowing across their supply chain. Implementing advanced data analytics allowed them to analyze information coming from all connected devices, which afforded a much deeper view into data trends. This intelligence allowed the retailer to streamline their supply chain processes even further. Want to learn more? Explore how you can move forward with your digital transformation; take a look at how OpenText Release 16 enables companies to manage the flow of information in the digital enterprise, from engagement to insight.

Read More

Westpac Bank Automates and Speeds Up Regulatory Reporting with OpenText Analytics

Westpac

When Westpac Banking Corporation was founded in 1817 in a small waterfront settlement in Australia, banking was rudimentary. Records were kept with quill pens in leather-bound ledgers: pounds, shillings, and pence into the cashbox; pounds, shillings, and pence out. (Until a cashier ran off with half the fledgling bank’s capital in 1821, that is.) Now, exactly 200 years after Westpac’s parent company opened its doors, it’s not only the oldest bank in Australia but the second-largest, with 13 million customers worldwide and over A$812 billion under management. Every year it does more and more business in China, Hong Kong, and other Asia-Pacific nations. The downside to this expansion is more forms to fill out – managing the electronic and physical flow of cash across national borders is highly regulated, requiring prompt and detailed reports of transactions, delivered in different formats for each country and agency that oversees various aspects of Westpac’s business. These reports require information from multiple sources throughout the company. Until recently, pulling out and consolidating all these complex pieces of data was a manual, slow, labor-intensive process that often generated data errors, according to Craig Chu, Westpac’s CIO for Asia. The bank knew there had to be a better way to meet its regulatory requirements – but one that wouldn’t create its own new IT burden. A successful proof of concept led to Westpac adopting an information management and reporting solution from OpenText™ Analytics. To hear Chu explain how Westpac streamlined and automated its reporting process with OpenText™ iHub and Big Data Analytics, and all the benefits his company has realized, check out this short video showcasing this success story. (Spoiler alert: self-service information access empowers customers and employees.) If you’d like to learn more about what the OpenText Analytics Suite could do for your organization, click here.

Read More

Post-Election Score: Pundits 0, Election Tracker 1

election tracker

In the midst of post-election second-guessing over why so many polls and pundits failed to predict Donald Trump’s win, there was one clear success story: OpenText™ Election Tracker. Election Tracker, the web app that analyzed news coverage of the Presidential race from over 200 media outlets worldwide for topics and sentiment, was a great showcase for the speed, robustness, and scalability of the OpenText™ Information Hub (iHub) technical platform it was built on. Facing demand for more than 54,000 graphic visualizations an hour on Election Day, it ramped up quickly with no downtime – the kind of performance you’d expect from OpenText™ Analytics. Moreover, the tracker’s value in revealing patterns in the tone and extent of campaign news coverage provided valuable extra insight into voter concerns that pre-election polls didn’t uncover, and that insight didn’t just end after Election Day. It’s just one in a series of proofs of concept showing how our unstructured data analytics solutions shine at analyzing text and other unstructured data. They bring to the surface previously hard-to-see patterns in any kind of content stream – social media, customer comments, healthcare service ratings, and much more. OpenText Analytics solutions analyze these patterns and bring them to life in attractive, easy-to-understand, interactive visualizations. And if some unforeseen event ends up generating millions of unexpected clicks, Tweets, or comments that you need to sift through quickly, iHub offers the power and reliability to handle billions of data points on the fly.

Hello, Surprise Visitors!

Speaking of unforeseen events: some of the Election Tracker traffic was due to mistaken identity. On Election Day, so many people were searching online for sites with live tracking of state-by-state election results that electiontracker.us became one of the top results on Google that day. At peak demand, the site was getting nearly 8,000 hits an hour, more than 100 times the usual traffic. Senior Director of Technical Marketing Mark Gamble, an Election Tracker evangelist, was the site administrator that day. “On November 8 at around 6 a.m. I was about to get on a flight when I started getting e-mail alerts from our cloud platform provider that the Election Tracker infrastructure was getting hammered from all those Google searches. I’d resolve that alert, and another one would pop up.” “We had it running at just two nodes of our four-node cluster, to keep day-to-day operating costs down. Our technical team said, ‘Let’s spin up the other two nodes.’ That worked while I was changing planes in Detroit. But when I got off, my phone lit up again: demand was still climbing. It was just unprecedented traffic.” “So we had our cloud provider double the number of cores, or CPUs, that run on each node. And that kept up with demand. The site took a bit longer to load, but it never once crashed. That’s the advantage of running in the cloud – you can turn up the volume on the fly.” “Of course, the flexibility of our iHub-based platform is unique. All the cloud resources in the world won’t help you if you can’t quickly and efficiently take advantage of them.”

Easy Visualizing

Demand on the site was heightened by the Election Tracker’s live, interactive interface. That’s intentional, because OpenText Analytics solutions encourage users to take a self-service approach to exploring their data. “It’s not just a series of static pages,” explains Clement Wong, Director of Analytics On-Demand Operations.
“The infographics are live and change as the viewer adjusts the parameters. With each page hit, a visitor was asking for an average of seven visualizations. That means the interface is constantly issuing additional calls back and forth to the database and the analytic engine. iHub has the robustness to support that.” (In fact, at peak demand the Tracker was creating more than 15 new visualizations every second.) “Some of the reporters who wrote about Election Tracker told us how much they enjoyed being able to go in and do comparisons on their own,” Gamble says. “For example, look at how much coverage each candidate got over the past 90 days, compared to the last 7 days, then filter for only non-U.S. news sources, or drill down to specific topics like healthcare or foreign policy. That way they didn’t have to look at static figures and then contact us to interpret for them; the application granted them the autonomy to draw their own conclusions.”

Great Fit for Embedding

“The self-service aspect is one reason that iHub and other OpenText Analytics solutions are a great fit for embedding into other web sites, for use cases such as bank statements or utility usage,” Gamble adds. “First of all, an effective embedded analytic application has to be highly interactive and informative, so people want to use it – not just look at ready-made pages, but feel comfortable exploring on their own.” “Embedded analytics also requires seamless integration with the underlying data sources, so the visuals are integral and indistinguishable from the rest of the site, and it needs high scalability to keep up with growing usage.”

What’s Next?

The iHub/InfoFusion integration underlying the Election Tracker is already being used in other proofs of concept. One is helping consumer goods manufacturers analyze customers’ social media streams for their sentiments about the product and their needs or concerns. “If you think of Election Tracker as the Voice of the Media, the logical next step is Voice of the Customer,” Gamble says. The Election Tracker is headlining the OpenText Innovation Tour, which just wrapped up in Asia and resumes in spring 2017.
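For what it's worth, the peak figures quoted above hang together. A quick back-of-the-envelope check, using only the approximate numbers from the article rather than any internal telemetry, shows how roughly 8,000 page hits an hour at about seven visualizations per hit works out to the "more than 15 new visualizations every second" mentioned:

```python
hits_per_hour = 8_000          # approximate peak page hits reported above
visualizations_per_hit = 7     # average visualizations requested per page hit

per_hour = hits_per_hour * visualizations_per_hit
per_second = per_hour / 3600

print(f"{per_hour:,} visualizations/hour")        # 56,000 visualizations/hour
print(f"{per_second:.1f} visualizations/second")  # 15.6 visualizations/second
```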

Read More

Telco Accessibility 101: What’s Now Covered by U.S. Legislation

telco accessibility

In a word, everything. Name a telecommunications product or service and chances are it has a legal requirement to comply with federal accessibility laws. Let’s see…

- Mobile connectivity services for smartphones, tablets, and computers? Check
- Smartphones, tablets, and computers? Check
- Internet services (e.g., cable, satellite)? Check
- Television services (e.g., cable, satellite, broadcast)? Check
- Televisions, radios, DVD/Blu-ray players, DVRs, and on-demand video devices? Check
- Email, texting, and other text-based communication? Check
- VoIP communications and online video conferencing? Check
- Fixed-line phone services? Check
- Fixed-line telephones, modems, answering machines, and fax machines? Check
- Two tin cans attached by a string? Check

All of these products and services are covered by U.S. accessibility legislation (except the cans and string). What laws are we talking about here? Mainly Section 255 of the Telecommunications Act of 1996, for products and services that existed before 1996, and the Twenty-First Century Communications and Video Accessibility Act (CVAA) of 2010, which picked up where Section 255 left off, defining accessibility regulations for broadband-enabled advanced communications services. Web accessibility legislation, while not telco-specific, is also relevant. The Americans with Disabilities Act (ADA) doesn’t explicitly define commercial websites as “places of public accommodation” (because the ADA predates the Internet), but the courts have increasingly interpreted the law this way. Therefore, as “places of public accommodation,” company websites – and all associated content – must be accessible to people with disabilities. For more insight on this, try searching on “Netflix ADA Title III” or reading this article. (By the way, a web-focused update of the ADA is in the offing.) Last but not least, we come to Section 508 of the Rehabilitation Act, which spells out accessibility guidelines for businesses wanting to sell electronic and information technology (EIT) to the federal government. If your company doesn’t do that, then Section 508 doesn’t apply to you.

What this means for businesses

Not unreasonably, telecommunications companies must ensure that their products and services comply with accessibility regulations and are also usable by people with disabilities. This usability requirement means that telecom service providers must offer contracts, bills, and customer support communications in accessible formats. For product manufacturers, usability means providing customers with a full range of relevant learning resources in accessible formats: installation guides, user manuals, and product support communications. To comply with the legislation, telecommunications companies must find and implement cost-effective technology solutions that will allow them to deliver accessible customer-facing content. Organizations that fail to meet federal accessibility standards could leave themselves open to consumer complaints, lawsuits, and, possibly, stiff FCC fines.

Meeting the document challenge with accessible PDF

Telecommunications companies looking for ways to comply with federal regulations should consider a solution that can transform their existing document output of contracts, bills, manuals, and customer support communications into accessible PDF format. Why PDF?
PDF is already the de facto electronic document standard for high-volume customer communications such as service contracts and monthly bills because it’s portable and provides an unchanging snapshot, a necessity for any kind of recordkeeping. But what about HTML? Why not use that? While HTML is ideal for delivering dynamic web and mobile content, such as on-demand, customizable summaries of customer account data, it doesn’t produce discrete, time-locked documents. Nor does HTML support archiving or portability, meaning HTML files are not “official” documents that can be stored and distributed as fixed entities.

Document content is low-hanging fruit

Document inaccessibility is not a problem that organizations need to live with, because it can be solved immediately – and economically – with OpenText’s Automated Output Accessibility Solution, the only enterprise PDF accessibility solution on the market for high-volume, template-driven documents. This unique software solution enables telecommunications companies to quickly transform service contracts, monthly bills, product guides, and other electronic documents into WCAG 2.0 Level AA-compliant accessible PDFs. Whatever the data source, our performance is measured in milliseconds, so customers receive their content right when they ask for it. OpenText has successfully deployed this solution at government agencies as well as large commercial organizations, giving it the experience and expertise required to deliver accessible documents within a short time frame, with minimal disruption of day-to-day business. Fast, reliable, compliant, and affordable, our automated solution can help you serve customers and meet your compliance obligations. Learn more about the OpenText™ Automated Output Accessibility solution.

Read More

Banking Technology Trends: Overcoming Process Hurdles

Financial analytics

Editor’s Note: This is the second half of a wide-ranging interview with OpenText Senior Industry Strategist Gerry Gibney on the state of the global financial services industry and its technical needs. The interview has been edited for length and continuity.

Unifying Information for Financial Reporting

I heard a lot of discussion at the SIBOS 2016 conference in Geneva around financial reporting. Banks face procedural hurdles, especially if they’re doing merchant or commercial banking. A lot of them still have manual processes. In terms of the procedures, the bigger the bank, the bigger the problem. That’s because the information is often in many places. For example, different groups from the bank may approach their corporate banking customers to buy or use a service or product – which is great, but they have to track and report on it. Often in the beginning, these separate group reporting processes are manual. Eventually, they’ll want to automate the reporting and join the information to other data sources, but that’s the big challenge – it takes time to assemble and coordinate all the information streams and get them to work as an internal dashboard. A similar challenge is creating portals to track financial liquidity. Another example is where clients ask for specific reports. The bank doesn’t want to say no, so it has to produce the reports manually, often as a rush job, and in a format that the client finds useful. The challenge is to take large amounts of data and summarize them so you can give people what they ask for with the periodicity, the look, and the format that they want.

Embedded Visualizations for our Customers’ Customers

That’s where we come in. A lot of the value we offer with OpenText Analytics is embedding our analytic and visualization applications in a client’s own application so that they can offer dashboards, windows, reporting, and so forth to their own internal or external customers. The beauty of our embeddable business intelligence or analytics piece is that no one on the business side has to see it or work with it. It offers functionality that can be applied as needed, without having to make IT adjustments on your part or requiring people to enter data into bulky third-party programs. Tremendous capabilities are suddenly just there. Users can build a data map that automatically gathers and manages data, then organizes and reports it – in any format required, whether visual via charts and graphs, or numeric, if you prefer. Plus, it has powerful drill-down ability.

Flexibility to Cope with Regulatory Shifts

The other aspect of reporting is reporting to regulatory agencies. After the Great Recession and the banking crisis, governments worldwide have been stepping up their efforts in regulating the financial industry. Not just nations – local governments also. In fact, the fastest-growing department in every bank now is regulatory compliance. There are ever-increasing workloads and more workflow, but without more people to deal with it. The problem in the U.S. is that regulation presents a moving target. Dodd-Frank controls and the Volcker Rule required banks to end proprietary trading. There is potentially a new level of risk from government changes in requirements and the need for banks to produce new reports, sometimes even on things that they weren’t aware they needed to report on. Banks and other financial institutions need a reporting solution that enables quick and easy production of whatever information government regulators are asking for.
An ideal reporting solution will maximize the flexibility in how you can look across both unstructured data and all the structured data in multiple silos. This is a good use case for ad hoc analytics and reporting – the power to create new types of reports, whatever regulators may require.

Financial Analytics: Understanding Your Customer

Another analytics-related topic I heard about at SIBOS was the need to understand customers better and how to identify a good target customer. This is top-of-mind for banks. I’m amazed that people gather streams of data in their CRM systems and then don’t use it. Often their CRM systems are stand-alone, not connected to anything. They might contain information that’s extremely valuable and could enhance their efforts – for example, sales efforts, proposals, and pitch books. They could tie these things together, and then analyze their findings to correlate sales resources to results. With a unified analytics flow, you can drive business by managing client relationships, figuring out through advanced analytics who is the best candidate for up-selling or cross-selling, as well as identifying new customers. Finding new insights by searching all these CRM systems is a tremendous value that analytics, especially embeddable analytics from OpenText, can deliver. Analytics can bring a tremendous amount of value to business operations and make them more efficient, productive, and profitable. You can’t ask more than that. To learn more about how OpenText Analytics can help the financial services industry unlock the business value of existing data, consider our Webinar, “Extracting Value from Your Data with Embedded Analytics,” Wednesday, Dec. 14, at 11 a.m. Pacific Time/2 p.m. Eastern. Click here for more information and to register.
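The "tie these things together" step Gibney describes is, at its simplest, a join across silos followed by a correlation check. Here is a minimal pandas sketch of that idea, with invented column names and toy numbers; it is an illustration of the analysis pattern, not an OpenText product feature.

```python
import pandas as pd

# Hypothetical extracts from two siloed systems; columns and values are invented.
crm = pd.DataFrame({
    "account_id": [101, 102, 103, 104],
    "proposals_sent": [3, 1, 5, 2],
    "pitch_meetings": [4, 1, 6, 2],
})
sales = pd.DataFrame({
    "account_id": [101, 102, 103, 104],
    "closed_revenue": [250_000, 40_000, 410_000, 90_000],
})

# Join the silos on a common key, then see which sales activities track revenue.
combined = crm.merge(sales, on="account_id")
print(combined.drop(columns="account_id").corr()["closed_revenue"])
```

In practice the same joined view also feeds up-sell and cross-sell scoring, since accounts that look like past high-revenue customers on the CRM side become candidates for outreach.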

Read More

Banking Trends from SIBOS: Technology Solutions to Tame Rampaging Workflows

banking trends from SIBOS

Editor’s Note: Gerry Gibney, Senior Industry Strategist at OpenText and resident expert on the financial services industry, was recently interviewed on the banking trends and technical needs he discovered at SIBOS (the annual trade show hosted by SWIFT, provider of global secure financial messaging services). I always come back from SIBOS having learned new things. It’s one of the largest banking events in the world, and this year one of the big topics was domestic payments. Many people aren’t aware that for large banks, corporate internet banking payments represent around 24% of their revenue. Banks benefit from payment money while it is in their hands, and they can charge fees for the payment services. It’s a big market because payments have to be made, whether regular payments such as rent and utilities on buildings or one-time money transfers. And they add up. For bigger banks, we’re talking several hundred million dollars each. Of course, they would prefer to keep that balance in their bank or extract it over time. I see a big role for OpenText here. Our BPM solution can be deployed to help with business networks, so banks can manage the workflow, the processes, and the controls. Managing the controls is important because with the SWIFT processes (payments and messaging), issues include: Who is authorized to send the money? Who else can do it? Who else can approve it? What if that person leaves? How do we add them into the system or remove them?

Automating Banking Workflow

Our own experience at OpenText is typical. Every year, our company goes through the payment permissions updating process. What do we need to know? What do we need to get? How do we get it? Where do we apply it? How many accounts are responsive? Doing business in, say, Hong Kong, Shanghai, or Japan, we may have 10 or 20 people with different signatory levels, each needing to sign an eight-page statement. Eight pages times 10 people, every year, for every account – that’s 80 pages per account every year, and that’s typical of many companies. A company might well have several hundred accounts with just one bank, and this has to be managed every year, with ever-changing rules – regulators now requiring the CFO’s home address, for example. Another workflow example is client onboarding, which has to be done every time. Even if the customer has 200 accounts and they want to add number 201, you still have to go through the onboarding process. So all the information is out there in different places – who knows how well protected it all is? OpenText’s security capabilities, and our ability to add, control, minimize, and automate workflow, add a lot of value. OpenText is also a SWIFT service bureau. We help with payments reporting, via EDI and our Business Network, to enhance what banks do. We help banking in many areas, across all our solutions – for example, with analytics, on the content side for unstructured data, or helping with records management, which is strong on compliance. With embeddable analytics we can gather all sorts of information, whether it’s for bank employees internally or their clients and customers. This information can be transformed into reports and used for sophisticated analysis, helping companies find new ways to get revenue from it. It can also help to track things more efficiently, comply with government regulations more easily, and improve the bottom line without increasing operating costs. In summary, it can be a tremendously powerful component of a bank’s overall offering.
The second half of this interview will be published next week.

Read More

Data Quality is the Key to Business Success

data quality

In the age of transformation, all successful companies collect data, but one of the most expensive and difficult problems to solve is the quality of that information. Data analysis is useless if we don’t have reliable information, because the answers we derive from it could deviate greatly from reality. Consequently, we could make bad decisions. Most organizations believe the data they work with is reasonably good, but they recognize that poor-quality data poses a substantial risk to their bottom line. (The State of Enterprise Quality Data 2016 – 451 Research) Meanwhile, the idiosyncrasies of Big Data are only making the data quality problem more acute. Information is being generated at increasingly faster rates, while larger data volumes are innately harder to manage.

Data quality challenges

There are four main drivers of dirty data:

- Lack of knowledge. You may not know what certain data mean. For example, does the entry “2017” refer to a year, a price ($2,017.00), the number of widgets sold (2,017), or an arbitrary employee ID number? This can happen because the structure is too complex, especially in large transactional database systems, or because the data source is unclear (particularly if that source is external).
- Variety of data. This is a problem when you’re trying to integrate incompatible types of information. The incompatibility can be as simple as one data source reporting weights in pounds and another in kilograms, or as complex as different database formats.
- Data transfers. Employee typing errors can be reduced through proofreading and better training. But a business model that relies on external customers or partners to enter their own data carries a greater risk of “dirty” data, because the business can’t control the quality of their inputs.
- System errors caused by server outages, malfunctions, duplicates, and so forth.

Dealing with dirty data?

Correcting a data quality problem is not easy. For one thing, it is complicated and expensive; benefits aren’t apparent in the short term, so it can be hard to justify to management. And as I mentioned above, the data gathering and interpretation process has many vulnerable places where error can creep in. Furthermore, both the business processes from which you’re gathering data and the technology you’re using are liable to change at short notice, so quality correction processes need to be flexible. Therefore, an organization that wants reliable data quality needs to build in multiple quality checkpoints: during collection, delivery, storage, integration, recovery, and during analysis or data mining.

The trick is having a plan

Monitoring so many potential checkpoints, each requiring a different approach, calls for a thorough quality assurance plan. A classic starting point is analyzing data quality when it first enters the system – often via manual input, or where the organization may not have standardized data input systems. The risk is that data entry can be erroneous, duplicated, or overly abbreviated (e.g., “NY” instead of “New York City”). In these cases, data quality experts’ guidance falls into two categories. First, you can act preventively on the process architecture, e.g., building integrity checkpoints, enforcing existing checkpoints better, limiting the range of data that can be entered (for example, replacing free-form entries with drop-down menus), rewarding successful data entry, and eliminating hardware or software limitations (for example, if a CRM system can’t pull data straight from a sales revenue database).
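To make the preventive side concrete, here is a minimal, hypothetical sketch of entry-point validation: restricting free-form fields to a controlled vocabulary, normalizing units, and rejecting duplicates before a record is accepted. The field names, allowed values, and rules are invented for illustration and are not part of any OpenText product.

```python
US_STATES = {"NY", "CA", "OH", "TX"}          # toy allowed-value list
ALIASES = {"new york": "NY", "new york city": "NY", "calif.": "CA"}

seen_ids = set()  # for duplicate detection across a batch

def validate_record(rec):
    """Return (clean_record, errors) for one hypothetical customer row."""
    errors = []
    clean = dict(rec)

    # Limit free-form entry: map common aliases onto a controlled vocabulary.
    state = str(rec.get("state", "")).strip()
    state = ALIASES.get(state.lower(), state.upper())
    if state not in US_STATES:
        errors.append(f"unknown state: {rec.get('state')!r}")
    clean["state"] = state

    # Normalize units so every weight is stored in kilograms.
    if rec.get("weight_unit") == "lb":
        clean["weight"] = round(rec["weight"] * 0.453592, 3)
        clean["weight_unit"] = "kg"

    # Reject duplicates instead of silently storing them twice.
    if rec["customer_id"] in seen_ids:
        errors.append(f"duplicate customer_id: {rec['customer_id']}")
    seen_ids.add(rec["customer_id"])

    return clean, errors

batch = [
    {"customer_id": 1, "state": "New York City", "weight": 150, "weight_unit": "lb"},
    {"customer_id": 1, "state": "ZZ", "weight": 70, "weight_unit": "kg"},
]
for rec in batch:
    print(validate_record(rec))
```

Checks like these are cheap to run at the point of capture, which is exactly why the preventive approach tends to pay for itself before the more expensive cleanup work described next.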
The other option is to strike retrospectively, focusing on data cleaning and diagnostic tasks (error detection). Experts recommend these steps:

1. Analyze the accuracy of the data, either by making a full inventory of the current situation (trustworthy but potentially expensive) or by examining work and audit samples (less expensive, but not 100% reliable).
2. Measure the consistency and correspondence between data elements; problems here can affect the overall truth of your business information.
3. Quantify system errors in analysis that could damage data quality.
4. Measure the success of completed processes, from data collection through transformation to consumption. One example is how many “invalid” or “incomplete” alerts remain at the end of a pass through the data.

Your secret weapon: “Data provocateurs”

None of this will help if you don’t have the whole organization involved in improving data quality. Thomas C. Redman, an authority in the field, presents a model for this in a Harvard Business Review article, “Data Quality Should Be Everyone’s Job.” Redman says it’s necessary to involve what he calls “data provocateurs”: people in different areas of the business, from top executives to new employees, who will challenge data quality and think outside the box for ways to improve it. Some companies are even offering awards to employees who detect process flaws where poor data quality can sneak in. This not only cuts down on errors, it has the added benefit of promoting the idea throughout the company that clean, accurate data matters.

Summing up

Organizations are rightly concerned about data quality and its impact on their bottom line. The ones that take measures to improve their data quality are seeing higher profits and more efficient operations because their decisions are based on reliable data. They also see lower costs from fixing errors and spend less time gathering and processing their data. The journey towards better data quality requires involving all levels of the company. It also requires assuming costs whose benefits may not be visible in the short term, but which will eventually boost these companies’ profits and competitiveness.


Attention All Airlines: Is Your Inaccessible Document Technology Turning Away Customers?

accessible PDF

Imagine you’re an airline executive, and a small but significant percentage of your customers (let’s say 10% or less) download flight itineraries and boarding passes from your website only to find that the information in these documents is jumbled and, in some cases, missing altogether. What would you do? Would you be concerned enough to take action? Would it matter that these customers didn’t know their flight number, boarding gate, and seat assignment? After all, 90% or more of your customers would still be receiving this information as usual.

Before venturing an answer to these hypothetical questions, let’s pause for a quick look at your industry. Over the last 60 years, airline profit margins have averaged less than 1%, though the situation has been improving in recent years. The International Air Transport Association (IATA) reported a net profit margin of 1.8% in 2014 and 4.9% in 2015, and the industry’s margin is expected to reach 5.6% in 2016. With such narrow margins, it’s clear that airlines need every customer they can get, and the industry has little tolerance for inefficiencies.

Now back to your document problem… Even if less than 10% of customers were affected, it seems likely that you’d take steps to fix the problem, and pull out all the stops to get it done as fast as possible, before the company loses many customers. Of course, the underlying assumption here is that a proven, economically feasible IT solution is available.

This might be happening at your airline – for real

All hypotheticals aside, a scenario like this could actually be playing out at your company right now. Consider: according to the 2014 National Health Interview Survey, 22.5 million American adults – nearly 10% of the adult population – reported being blind or having some form of visual impairment. To access online flight booking tools, along with electronic documents such as itineraries and boarding passes, many of these people need to use screen reader programs that convert text into audio. If the documents aren’t saved in a format like accessible PDF (with a heading structure, defined reading order, etc.), they’re likely to come out garbled or incomplete in a screen reader.

Of course, visually impaired customers could book their flights by phone and opt to receive Braille or Large Print documents in the mail (expensive for your airline). Then again, theoretically, all of your other customers could book by phone, too. The point is that you don’t really want customers booking by phone, because your self-serve website is less costly to operate than customer call centers; electronic documents are cheaper than paper and postage, and much cheaper than Braille and Large Print. So wouldn’t it be nice if there were an affordable technology solution you could plug in to serve up the documents that all of your customers – that’s the 90% plus the 10% – need to fly with your airline? Of course, it would be even better if the solution met the requirements of the new Department of Transportation (DOT) rules implementing the Air Carrier Access Act (ACAA), which have a compliance deadline of December 12, 2016. Customer satisfaction and regulatory compliance? Now that would be good.

OpenText Automated Output Accessibility Solution

OpenText has the only enterprise PDF accessibility solution for high-volume, template-driven documents. This unique software solution can dynamically transform electronic documents such as e-ticket itineraries/receipts and boarding passes into accessible PDFs that comply with the DOT’s new ACAA rules.
Designed to be inserted between the back office (e.g., a passenger reservation system) and a customer-facing web portal, OpenText™ Automated Output Accessibility Solution has minimal impact on existing IT infrastructure. Even better, the solution generates WCAG 2.0 Level AA compliant output that has been tested and validated by prominent organizations and advocacy groups for visually impaired people. OpenText has successfully deployed this solution at government agencies as well as large commercial organizations, and those deployments have given us the experience and expertise required to deliver accessible documents within a short time frame, with minimal disruption to day-to-day business. As the de facto electronic document standard for high-volume customer communications, the PDF format offers both portability and an unchanging snapshot of information – necessities for a document of record. Contact us to discuss how we can help you deliver accessible, ACAA-compliant PDF documents to your customers. Remember, the DOT’s deadline is December 12, 2016.


Fintech at SIBOS: From Everyday Banking to Science Fiction

FinTech

In late September, we were at SIBOS 2016, the annual trade show hosted by SWIFT. It’s a major showcase for innovative financial technology (“Fintech”), featuring the latest news and analysis about banking and electronic payments. Over 8,500 people attended this year, with Eurasian, Middle Eastern, African, and Chinese banks showing a stronger presence than in the past. The show was in Geneva, and we were struck by the contrast between the timeless beauty of the Alpine setting (snow-covered Mont Blanc looming over the skyline, Lac Leman at the foot of our hotel, and our daily commute passing vineyards) and the dynamic, even futuristic, requirements of the 21st-century financial industry discussed at the conference.

SIBOS has been morphing as SWIFT seeks out new directions. It offers a showcase for Fintech through its Innotribe program, continues to develop electronic communications standards for a wide range of uses, and this year focused more on cyber-security to defend the SWIFT electronic transaction platform (after hackers exploited a SWIFT weakness to steal $81 million from the central bank of Bangladesh’s account at the Federal Reserve). We also heard a lot of buzz about new Fintech developments such as cybersecurity and artificial intelligence (e.g. algorithms that could manage simple investment portfolios as well as most human advisors can).

Blockchain: From cyberpunk to competitive advantage

Of all these new technologies, blockchain was a key topic of discussion at SIBOS. The conference was full of questions about it: How does it work? What does it mean for us? How will payments be affected? Who is working with it? And what are the real security issues? A blockchain is a type of distributed, online database that keeps a permanent, tamper-proof record of financial transactions. It was created to settle trades in Bitcoin, the virtual currency, and has since become popular for deals in “real” currencies because all parties can track a transaction securely, with no need for third-party verification (a minimal sketch of the underlying hash-chaining idea appears at the end of this post). SWIFT is interested in blockchain technology even though – or maybe because – it could pose strong competition to SWIFT’s own secure payments service, which can take days or weeks to settle a complex transaction. Fintech competitors using blockchain forecast that they will be able to cut transaction times down to near zero.

“Let’s get it done”

However, what interested us most about SIBOS 2016 is that despite all that buzz, there was still a core of business as usual – practical, “let’s get it done” business. The banks still face the same issues: “How do we do this payments business faster and cheaper, and take better advantage of the relationships it creates?” For example, client onboarding continues to be a challenge for many banks, and we provide a lot of value in this area. We had many discussions about how OpenText helps banks improve their overall compliance, get to revenue sooner, and achieve higher customer satisfaction rates. We also had conversations about the wealth of data that banks hold and how they can make better use of it, for both themselves and their customers. While they may be experimenting with new technologies driven by the Fintech boom, the practical business in the next cycle will be in establishing value from the networks and relationships already in place.

OpenText’s role in the new Fintech world

Naturally, we feel OpenText has a role to play here.
To start with, OpenText™ Analytics solutions help financial companies extract more value from their information by liberating it from the many separate silos it’s often housed in and integrating those streams into a complete, accurate picture of investment performance, customer satisfaction, user experience, or response to marketing incentives. Our message attracted a lot of interest at SIBOS, where we had great meetings with clients and prospects. One of the highlights was our Oktoberfest party, co-sponsored with SAP; it was a great success, with more than 260 people attending. Next year, SIBOS will take place in Toronto, Canada’s financial capital – not far from our corporate headquarters in Waterloo. Who knows what strides the Fintech world will have taken by then?
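As promised above: since blockchain drew so many questions at the show, here is a minimal sketch of the hash-chaining idea that makes such a ledger tamper-evident. It is a toy illustration, not SWIFT’s or any vendor’s implementation: there is no network or consensus layer, and the block fields and sample transactions are made up.

```typescript
import { createHash } from "crypto";

// Toy block structure; real blockchains add timestamps, signatures, and consensus.
interface Block {
  index: number;
  data: string;          // e.g. a serialized payment instruction
  previousHash: string;
  hash: string;
}

// The hash of a block covers its contents plus the previous block's hash.
function hashBlock(index: number, data: string, previousHash: string): string {
  return createHash("sha256").update(`${index}|${data}|${previousHash}`).digest("hex");
}

function appendBlock(chain: Block[], data: string): Block {
  const previousHash = chain.length ? chain[chain.length - 1].hash : "0".repeat(64);
  const index = chain.length;
  const block: Block = { index, data, previousHash, hash: hashBlock(index, data, previousHash) };
  chain.push(block);
  return block;
}

// Recomputes every hash and checks the links; any edit to past data breaks the chain.
function verify(chain: Block[]): boolean {
  return chain.every((b, i) =>
    b.hash === hashBlock(b.index, b.data, b.previousHash) &&
    (i === 0 || b.previousHash === chain[i - 1].hash)
  );
}

const ledger: Block[] = [];
appendBlock(ledger, "Alice pays Bob 100 CHF");
appendBlock(ledger, "Bob pays Carol 40 CHF");
console.log(verify(ledger));              // true
ledger[0].data = "Alice pays Bob 1 CHF";  // tamper with a recorded transaction...
console.log(verify(ledger));              // ...and verification fails: false
```

Because each block’s hash incorporates the previous block’s hash, altering any recorded transaction invalidates every block that follows it, which is why all parties can trust the shared record without a third-party verifier.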


How the Same Data Analytics Used in Politics Can Help Your Business

Every four years, about 230 million Americans are eligible to vote in the U.S. presidential election. That’s a huge audience to influence, no matter how sophisticated your analytics. I mean, which data pool on which channel do you start with? Where do you look first? Just like businesses trying to reach customers, political campaigns can analyze sentiment, social media activity, and feedback from a variety of channels to spot trends, make predictions, and craft persuasive messages. A new OpenTextVoice article on Forbes.com, 3 Things The Presidential Race Can Teach Business About Digital Strategy, explores how the same micro-targeting used in political data analytics can be applied in business. As November draws near, the data crunching will undoubtedly grow more feverish. To get a sense of how often the media is covering each candidate and in what context, check out the free online Election Tracker ’16 from OpenText. To read the full article on Forbes.com, go here.


Power Up Your iHub Projects with Free Interactive Viewer Extensions

Interactive Viewer

We’ve updated and are republishing a series of helpful tips for getting the most out of the Interactive Viewer tool for OpenText™ iHub, with free extensions created by our software engineers. These extensions boost the appearance and functionality of Interactive Viewer, the go-to product for putting personalized iHub content in the hands of all users. (If you don’t already have iHub installed, click here for a free trial.) Below are links to the full series of six blog posts. If you have any suggestions for further extensions or other resources, please let us know through the comments below.

1. Extend Interactive Viewer with Row Highlighting – A simple jQuery script for highlighting the row the user’s pointer is on (see the sketch after this list).
2. Extend Interactive Viewer with a Pop-Up Dialog Box – Add a fully configurable pop-up dialog box to display details that don’t need to appear in every row.
3. Extend iHub Reports and Dashboards with Font Symbols – Dress up your iHub reports with nifty symbols.
4. Extend iHub Dashboards with Disqus Discussion Boards – Encourage conversation the easy way, by embedding a discussion board in the popular Disqus format.
5. Extend iHub Interactive Viewer with Fast Filters – Make column-based filtering easy by using JSAPI to build a Fast Filter, a selectable drop-down menu of distinct values that appears in the header of a column.
6. Extend Interactive Viewer with Table-Wide Search – Filter across multiple columns in an iHub table by creating a table-wide search box.
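For a flavor of what these extensions involve, here is a minimal sketch of the row-highlighting idea from the first post, written in TypeScript. It is not the published extension’s code: the row selector and CSS class name are hypothetical, and it assumes jQuery is already loaded on the page hosting the viewer, as the posts above imply.

```typescript
// Assumes jQuery is already loaded by the page hosting the Interactive Viewer table.
declare const $: any;

// Hypothetical selector for rows rendered by the viewer; adjust it to the real markup.
const ROW_SELECTOR = ".report-table tr";

// Delegated handlers toggle a highlight class as the pointer enters and leaves a row.
$(document)
  .on("mouseenter", ROW_SELECTOR, function (this: HTMLElement) {
    $(this).addClass("row-highlight");
  })
  .on("mouseleave", ROW_SELECTOR, function (this: HTMLElement) {
    $(this).removeClass("row-highlight");
  });

// Companion CSS, added once to the page:
//   .row-highlight { background-color: #fff3c4; }
```

The full posts linked above cover the production details, such as where to attach the script within an iHub report design.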


Your Medical Biography is a Digitized Trail of Big Data

Healthcare

A far cry from the walls of paper files in medical offices, today’s digitized records let patients see lab results, get refill alerts, and generally have more perspective on and control over their medical destiny. These new systems are even using natural language processing algorithms to grasp the nuances of language, much as humans do. A new OpenTextVoice article on Forbes.com, Big Data Takes Turn As Storyteller, Pulls Together Health Narratives, shows how the digitization of health records is changing the way our medical histories are recorded. Now we can save and analyze not only structured data like dates and procedure codes, but also unstructured information like doctors’ notes. All of these advancements are changing the medical paradigm, creating a partnership between doctor and patient. They are even supporting more sophisticated analysis, so that a treatment plan can be based on the full picture rather than just a snapshot. Get the full story here.
