Compliance

Why Information is the New Currency

We live in a digital world. A testament to this new reality is the growing value of digital content. People download songs, purchase movies online, exchange emails, and share personal information—all in the form of digital content. Information in its many new forms has become commoditized. In a digital world, information is the new currency. Will it replace the dollar, the euro, the yen? Not yet, but as information flows across networks, as it is exchanged and more metadata is collected, it grows in value. New businesses and whole industries are emerging to support the digitization of content. As industry leaders like Google and Facebook have demonstrated, opportunities to monetize information are abundant. Like money, data can be stolen. As information grows in value, so will the need to protect and manage it—and this will be increasingly mandated by governments and regulatory bodies. Many large companies (health care providers, governments, and banks, to name a few) are the gatekeepers of highly confidential, personal information. They are susceptible to information leaks. In a digital world, how will governments and regulators monitor and protect the huge amounts of personal data stored in the cloud? As society becomes digital and the Internet propagates a faster pace of crime, organizations will need to focus on developing and enforcing governance policies, standards, and systems to prevent identity theft and online fraud. The mass digitalization of products, services, processes, and overall business models will demand a disciplined approach to managing, governing, and innovating with information. Enter Enterprise Information Management, or EIM. EIM is a set of technologies and practices that maximize the value of information as it flows across networks, supply chains, and organizations.
Its core technologies work together to create an end-to-end platform for sharing, collaboration, analysis, and decision-making, based on the effective management of information to harness its potential while mitigating risk through governance, compliance, and security. EIM delivers a long list of benefits for the enterprise, including reduced costs, increased transparency, improved security and compliance, and optimized productivity and efficiency—but the overarching benefit that EIM gives organizations is the ability to simplify their operations, transform their processes and information, and accelerate business agility to innovate at the speed of digital. In a digital world, information will play a fundamental role in empowering the enterprise. Digital leaders will differentiate their products and services based on a strategy that maximizes the potential of digital information. They will use EIM technologies to connect information for better performance, greater opportunity, and deeper insight into their customers. I’ll take a closer look at how competitive advantage is created through managing consumer-related information in the next post in this series, “Digital Engagement and the New Consumer”. Find out how you can capitalize on digital disruption. To learn more, read my book, Digital: Disrupt or Die.

Read More

5 Ways to Simplify eDiscovery

Many organizations are failing to keep up with their unstructured data. Legal and IT find themselves dealing with unmanaged data growth, often making eDiscovery a monumental task that eats up valuable resources. With legal and IT budgets constricted, it can be hard to manage rising and unpredictable eDiscovery costs. Being proactive, rather than reactive, is key. Below are five ways you can simplify the eDiscovery process and substantially reduce cost.

#1 Avoid Legal Jargon
Legal holds are meant to be acted upon by employees. To streamline the eDiscovery process, legal and compliance teams need to understand how employees talk and the issues related to their documents. Draft your legal hold notices in a way that everyone can understand, so recipients know what to do and fully understand expectations.

#2 Anticipate Risk
While you may be tempted to cross your fingers and simply hope no disputes arise, anticipating and planning for potential risks can prove very rewarding. Data and documents can be categorized based on potential risks, such as trademark disputes, with all corporate naming and branding documentation stored in a separate location. If and when litigation arises, your preparedness will pay off.

#3 Early Case Assessment
Once data is properly preserved and collected, it’s extremely important to condense the information down to a more manageable size. Actively culling the data during document review will save you time and money during the eDiscovery process.

#4 Integrate Solutions
eDiscovery was formerly a “piecemeal” solution. Separate tools such as early case assessment and document review functioned as standalone software packages, which demanded the risky business of information transfer. Integrating all eDiscovery components under one platform is not only the safest route, but the simplest as well.

#5 Hire Experts
An eDiscovery solution provider can best advise and consult clients through the assessment, selection, and implementation processes. These experts can help you anticipate and plan for issues before they arise, and should essentially become an extension of your internal team.

I think what it all boils down to is being proactive. eDiscovery costs are substantially reduced when you’re prepared for an eDiscovery request by implementing the right tools and team. Asking for advice now and implementing technology might save you headaches, and hundreds of thousands, if not millions, in eDiscovery costs and potential litigation disputes.
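The culling step in #3 can be made concrete with a small sketch. This is not any particular eDiscovery product’s logic; the document set, keywords, and function name are hypothetical, and real early case assessment tools use far richer criteria (custodians, date ranges, near-duplicate detection):

```python
import hashlib

def cull_documents(documents, keywords):
    """Cull a collected document set for early case assessment:
    drop exact duplicates, then keep only documents that mention
    at least one case-relevant keyword."""
    seen_hashes = set()
    relevant = []
    lowered = [k.lower() for k in keywords]
    for doc_id, text in documents:
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if digest in seen_hashes:  # exact duplicate -- skip it
            continue
        seen_hashes.add(digest)
        body = text.lower()
        if any(k in body for k in lowered):
            relevant.append(doc_id)
    return relevant

docs = [
    ("d1", "Email re: trademark dispute over the Acme brand"),
    ("d2", "Lunch menu for Friday"),
    ("d3", "Email re: trademark dispute over the Acme brand"),  # duplicate of d1
]
print(cull_documents(docs, ["trademark", "branding"]))  # -> ['d1']
```

Even this toy version shows why culling pays off: the duplicate and the irrelevant document never reach attorney review, which is typically the most expensive phase.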

Read More

Compliance violations for faxing and Windows Server 2003 users

If your organization or users are still running Windows Server 2003 after July 14, 2015, be prepared for the consequences. Since Microsoft will end support for Windows Server 2003 this month, anyone still using it is exposed to security breaches. Malware and cyber threats can go undetected on unsupported operating systems, which alone is a huge risk for organizations. However, did you know that these risks also put an organization in danger of non-compliance with regulations such as HIPAA, PCI, Sarbanes-Oxley, and others? Running unsupported operating systems, such as Windows XP, might be enough to make the Federal government take a closer look at organizations bound by these important regulations. This non-compliance extends to any fax server infrastructure that may be running on Windows Server 2003. If you have a fax server deployed on Windows Server 2003, take a deep breath, and call OpenText. If you need an on-premises fax server running on either Windows Server 2008 or Windows Server 2012, we’ve got you covered. Or eliminate the need for any operating system for your faxing by using OpenText Fax2Mail, an enterprise-grade, 100% cloud fax service. We can do that, too. Either way, don’t let your operating system put your faxing operations at risk of non-compliance. To learn more about the end of support for Windows Server 2003, find more information here!

Read More

Expanding the Banking Universe with a Mobile-Only Play

Mobile phones continue to help break new ground in the world of banking. A very interesting example is Atom Bank, which is based in the United Kingdom today (or is it everywhere?). Atom Bank was recently awarded a banking license and plans to commence operations later this year. It has already raised around 25 million pounds (about $39 million). So why is this so interesting and different? Atom Bank will operate only through a mobile app. That is right: they just have an app. Of course there will be no branches, and they will not even have a website initially. This seems a strange strategy, as how will customers be able to find them unless they read my blogs? They claim that customers will be able to open accounts and carry out all their banking activity using only a smartphone. They also said they want to “set new standards for the banking sector” when it comes to technology. Well, this matches quite well with what Millennials are thinking: that innovation in banking will come from technology companies, not from banks. Viacom Media Networks did research and came up with the Millennial Disruption Index, which is copied below. Notice that 33 percent of Millennials do not think they will ever need a bank, and nearly half are counting on technology start-ups to overhaul the way banks work. Talk about supply meeting demand, and here comes Atom Bank. Or maybe they should be called Atom Software, the smartphone technology company with an app.

[Figure: Banking on Mobile. Source: Viacom Media Research]

Atom is the latest in a string of technology companies shaking up the banking industry. Who would have thought that Apple would create a payment service a la PayPal? It has certainly done well so far. Anybody know a user or two of Venmo, the under-30 set’s current favorite for making small payments to each other? This is not futuristic; these services already exist and work well today. So, will Atom Bank be what Millennials are longing for?
Well, there are a few challenges, or what we might call complexities. They will need to work with a regular retail bank for mundane things like checking and cash deposits. I don’t know if they will be responsible for Know Your Customer (KYC), or will have deposit limits, or will concern themselves with anti-money laundering and SARs (Suspicious Activity Reports). Perhaps the brick-and-mortar bank they plan to partner with will do the heavy compliance lifting for them. Since Atom Bank is all about a high-quality customer experience on a smartphone, they are certainly addressing what Millennials are looking for. They are not yet sharing all aspects of what they plan to do, as they said they do not want to assist potential competitors. They did announce that they will have biometric security, 3D visualizations, and gaming technology. Sounds like fun! An app on your smartphone will do all of that? Mobile phones, now smartphones, have come a long way. Even if all of this works as planned and Atom Bank is very successful, they will face fierce competitors, from startups as well as established organizations. But if they capture the hearts, minds, and bank accounts of the Millennials before others do, they will be very successful, and the reality of the Millennial Disruption Index will become even more obvious.

Read More

Achieving Equal Access in Health Care Information

According to a report published by the Equal Rights Center in 2011, blind and visually impaired individuals routinely face barriers in receiving information regarding their health care in accessible formats. This includes documents such as test results and prescriptions; benefits information such as Explanations of Benefits, eligibility, and termination notices; and e-delivered communications such as billing statements, summaries of benefits, and more. It also includes information received by visually impaired Americans covered by Medicare and Medicaid. These individuals are often presented with work-around solutions, such as relying on friends, family, or healthcare practitioners to read their private medical information to them. Not only is this a breach of the individual’s privacy, but it can also lead to poor health outcomes and loss of benefits. The Centers for Medicare and Medicaid Services (CMS), an agency of the US Department of Health and Human Services, is the largest single payer for health care in the United States. As per data from the CMS:

90 million Americans receive healthcare coverage through Medicare, Medicaid, and the State Children’s Health Insurance Program.
Approximately 4.3 million individuals over the age of 65 report some form of visual impairment.
Approximately 700,000 Medicare beneficiaries between the ages of 21 and 64 have some form of visual impairment.

Private healthcare insurers have been contracted by the Centers for Medicare and Medicaid Services to offer Medicare and Medicaid programs, and these insurance providers must meet federal regulations, i.e., Section 508, requiring that they ensure access to and use of their websites and digital documentation by people with disabilities, including blind or visually impaired individuals. Non-compliance could lead to penalties and the loss of lucrative contracts for insurers. It is therefore no surprise that document (e.g.
PDF) accessibility is a hot-button issue for government and even private healthcare insurers contracted by the CMS. As “public accommodations” under the Americans with Disabilities Act (ADA), healthcare insurers are generally well aware of their legal responsibility to customers with disabilities such as visual impairment, and are quite used to complying with these regulations. But now that accessibility requirements are expanding into cyberspace, healthcare insurers need to find appropriate technology solutions for this new challenge. Until a couple of years ago, it simply had not been possible for healthcare insurers to create high-volume communications and documents in accessible PDF format. The sheer scale of production, with documents numbering in the thousands or millions, precludes manual remediation because of several limiting factors:

The cost of manually remediating documents
Delivery time, due to the laborious nature of manual remediation
Stringent accessibility tagging requirements

OpenText has created an automated, software-based solution to address these very limitations. The OpenText Automated Output Accessibility solution can generate accessible PDFs from any high-volume, system-generated input print stream or other formats quickly and efficiently, while keeping storage size at bay. The solution was designed using thousands of man-hours’ worth of very specific experience and expertise in the system-generated document accessibility space, and our industry-leading transformation engine generates accessible output in milliseconds. In fact, the output generated by this solution has been reviewed by the National Federation of the Blind and other prominent organizations for the visually impaired. Learn more about the OpenText Automated Output Accessibility solution at http://ccm.actuate.com/solutions/document-accessibility.
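Whether a given PDF carries accessibility tagging can be screened programmatically. The sketch below is a crude byte-level heuristic, not a real validator: it only checks that the catalog entries a tagged PDF must reference (a /StructTreeRoot for the logical structure tree, and a /MarkInfo dictionary with /Marked set to true) appear literally in the file, so it can misjudge compressed or unusually encoded documents. Full conformance checking (e.g., against PDF/UA) requires a proper PDF parser.

```python
def looks_tagged(pdf_bytes: bytes) -> bool:
    """Quick screen for the markers a tagged (accessible) PDF carries:
    a /StructTreeRoot entry and a /MarkInfo dictionary with /Marked
    set to true. Heuristic only; not a conformance validator."""
    return (b"/StructTreeRoot" in pdf_bytes
            and b"/MarkInfo" in pdf_bytes
            and b"/Marked true" in pdf_bytes)

# Illustrative catalog fragments, not complete PDF files:
tagged_fragment = b"<< /Type /Catalog /MarkInfo << /Marked true >> /StructTreeRoot 12 0 R >>"
untagged_fragment = b"<< /Type /Catalog /Pages 2 0 R >>"
print(looks_tagged(tagged_fragment))    # -> True
print(looks_tagged(untagged_fragment))  # -> False
```

A screen like this can flag untagged output in a high-volume pipeline early, before documents reach remediation or delivery.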

Read More

Big Data Is Still a Game Changer, but the Game has Changed. Here’s How.

Not long ago, organizations bragged about the large volume of data in their databases. The implied message from IT leaders who boasted about their terabytes and petabytes and exabytes was that company data was like a mountain of gold ore, waiting to be refined. The more ore they had, the more gold – that is, business value – they could get out of it. But the “bigness” of Big Data isn’t the game changer anymore. The real competitive advantage from Big Data lies in two areas: how you use the data, and how you provide access to the data. The way you address both of those goals can make or break an application – and, in some cases, even make or break your entire organization. Allow me to explain why, and tell you what you can do about it – because mastering this important change is vital to enabling the digital world. How Big Data Has Changed Each of us – and the devices we carry, wear, drive, and use every day – generate a surge of data. This information is different from Big Data of just a few years ago, because today’s data is both about us and created by us. Websites, phones, tablets, wearables and even cars are constantly collecting and transmitting data – our vital stats, location, shopping habits, schedules, contacts, you name it. Companies salivate over this smorgasbord of Big Data because they know that harnessing it is key to business success. They want to analyze this data to predict customer behavior and likely outcomes, which should enable them to sell better (and, of course, sell more) to us. That’s the “how you use data” part of the equation – the part that has remained pretty consistent since market research was invented more than 100 years ago, but that has improved greatly (both in speed and precision) with the advent of analytics software. Then comes the “how you provide access to data” part of the equation – the part that highlights how today’s user-generated Big Data is different. 
Smart, customer-obsessed businesses understand that the data relationship with their consumers is a two-way street. They know that there is tremendous value in providing individuals with direct, secure access to their own data, often through the use of embedded analytics. Put another way: the consumers created the data, and they want it back. Why else do you think financial institutions tout how easily you can check balances and complete transactions on smartphones, and healthcare companies boast about enabling you to check test results and schedule appointments online? Making your data instantly available to you – and only to you – builds trust and loyalty, and deepens the bond between businesses and consumers. And like I said earlier, doing so is vital to enabling the digital world. The New Keys to Success But when a business decides to enable customers to access their data online and explore it with embedded analytics, that business must give top priority to customers’ security and privacy concerns. In a blog post, “Privacy Professor” Rebecca Herold notes that data breaches, anonymization and discrimination rank among the Top 10 Big Data Analytics Privacy Problems. Her post is a must-read for organizations that plan to provide data analytics to customers. To underline Herold’s point, Bank Info Security says that personal data for more than 391.5 million people was compromised in the top six security breach incidents in 2014 – and that number does not include the Sony breach that made headlines. Security and privacy must be a primary consideration for any organization harnessing Big Data analytics. Remember what Uncle Ben said to Peter Parker: “With great power comes great responsibility.” Meeting the privacy and security challenges of today’s user-generated Big Data requires a comprehensive approach that spans the lifecycle of customer data, from generation through distribution. 
If you want guidance in creating such an approach, check out the replay of a webinar I presented on June 23, Analytics in a Secure World. My colleague Katharina Streater and I discussed:

The drivers and trends in the market
What top businesses today do to ensure Big Data protection
How you can secure data during content generation, access, manipulation, and distribution
Strategies for complying with data security regulations in any industry

If you watch the replay, you’ll come away with great ideas for securing data from the point of access all the way through to deployment and display of analytic results. We explained why a comprehensive approach minimizes the risk of security breaches, while simultaneously providing a personalized data experience for each individual user. We closed the program by explaining how OpenText Analytics and Reporting products have the horsepower required to handle immense volumes of data securely. We showed how the OpenText Analytics platform scales to serve millions of users, and explained why its industrial-strength security can integrate directly into any existing infrastructure. Please check out Analytics in a Secure World today. Privacy Please image by Josh Hallett, via Flickr.
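The “available to you, and only to you” principle discussed above comes down to enforcing an ownership check at the data access layer before any analytics query runs. Here is a minimal sketch; the store layout, names, and function are illustrative assumptions, not the API of any OpenText product:

```python
def fetch_statements(store, requesting_user, account_id):
    """Return statement rows only if the authenticated user owns the
    account -- a minimal row-level security check that an embedded
    analytics layer would enforce before executing any query."""
    owner = store["accounts"].get(account_id)
    if owner != requesting_user:
        raise PermissionError("not your data")
    return [r for r in store["statements"] if r["account"] == account_id]

store = {
    "accounts": {"acct-1": "alice", "acct-2": "bob"},
    "statements": [
        {"account": "acct-1", "amount": 120.0},
        {"account": "acct-2", "amount": 75.5},
    ],
}
print(fetch_statements(store, "alice", "acct-1"))  # alice sees only her own rows
```

In production this check typically lives in the query engine itself (row-level filters bound to the authenticated session), so no code path can reach another customer’s rows by accident.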

Read More

8 Top Considerations for a Successful Cloud Partnership

As your organization embarks on creating a cloud strategy, there are many things to consider. What are the top benefits you are looking to achieve by moving workloads into the cloud? What areas of your business are you willing to consider as cloud or hybrid cloud implementations? Most of all, as you move into the cloud you are faced with extending your infrastructure and IT team to include your cloud provider. What are the key things you should consider as you determine the cloud partnerships you will embrace?

One Size Does Not Fit All

Developing a cloud strategy is an exercise in understanding your business processes, workloads, security rules, compliance requirements, and user adoption, just to name a few. This is not a simple task, and moving business to the cloud might mean a combination of deployment and costing strategies. Options for on-premises, public cloud, private cloud (both on and off site), hybrid cloud, SaaS, PaaS, and IaaS all offer organizations flexibility, but they can also be confusing. Which offering is ideal for which business process to generate the highest efficiency and cost benefit? There is no one-size-fits-all solution, which means IT leaders need to be prudent about cloud options and work with their vendors to ensure they get the most value for their investment.

Information Matters

As we live in the digital age, many organizations recognize the value of information and place significant priority on protecting it. Information Governance, knowing what information you have, where it is, and what you need to do with it, has never been more important. When organizations look at moving information to the cloud they need to be extra vigilant to ensure that their information policies and compliance regulations are both understood and upheld by their cloud partner.
The value and level of control required over information should play a part in an organization’s decision of what applications and what data will reside in the cloud, on premises or as part of a hybrid cloud implementation.
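One way to make the “what goes where” decision concrete is to encode placement rules as an explicit decision function that can be reviewed and debated. The attributes and rules below are illustrative assumptions, not a formal framework; real decisions also weigh cost, latency, data residency, and vendor capabilities:

```python
def recommend_deployment(workload):
    """Toy decision rule mapping workload attributes to a deployment
    model. Thresholds and categories are illustrative assumptions."""
    if workload["data_sensitivity"] == "high" and workload["compliance_bound"]:
        # regulated, sensitive information stays under direct control
        return "on-premises or private cloud"
    if workload["data_sensitivity"] == "high":
        return "private cloud"
    if workload["elastic_demand"]:
        # spiky, non-sensitive workloads benefit most from public cloud economics
        return "public cloud"
    return "hybrid cloud"

print(recommend_deployment(
    {"data_sensitivity": "high", "compliance_bound": True, "elastic_demand": False}
))  # -> on-premises or private cloud
```

Writing the rules down this way forces the conversation between IT, legal, and the cloud partner that the post recommends: every rule is visible and can be checked against the organization’s actual compliance obligations.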

Read More

Upcoming Accessibility Deadlines for Federal Student Loan Statement Servicers

Section 508 and WCAG compliance has been an important mandate for the U.S. Federal Government, and the Department of Education is one of the government agencies actively working towards meeting these requirements for visually impaired students receiving federal loans. The Department of Education has now issued time frames and deadlines for WCAG compliance to student loan servicers that generate and distribute federal Direct Loan Program (DLP) statements, notices, and communications. Accordingly, all loan statements, notices and communications, forms, and websites need to be made available to borrowers in accessible read-only HTML format or a non-proprietary equivalent (e.g., accessible PDF), complying with Section 508 of the Rehabilitation Act and WCAG 2.0, within a few short months. The Federal Government has also established additional time frames for testing and verification of accessibility compliance before the actual deadline for making accessible content available to borrowers. Loan service providers typically generate statements, notices, and communications in print stream or PDF formats. Making these accessible using traditional methods is manual, laborious, time-consuming, and expensive. Visually impaired students therefore typically experience a lag in receiving critical information included in their statements, notices, and communications in formats they can access, due to the timelines and delays involved in generating manually tagged accessible PDF, Braille, Large Print, Audio, or HTML formats. This is far from ideal, and visually impaired students now expect to be treated the same as anyone else with regard to the timely availability of important information. Most Federal Student Loan Statement Servicers are still struggling to find a solution that meets compliance requirements and can be made operational before the required deadlines.
While building a new solution from scratch is often how IT departments approach technology challenges, given the tight timelines involved and the level of accuracy, expertise, testing capability, and technological know-how required, it is not an efficient way of addressing this particular requirement. Key points for Federal Loan Service Providers to consider when implementing an automated accessibility solution:

The solution should be easy to implement and should not require management of multiple formats
Storage costs should not increase as a result of the solution
The solution should be infinitely scalable and able to support the demands of generating millions of documents without performance issues

The OpenText Automated Output Accessibility solution addresses each of these requirements, and can generate accessible PDFs from any input print stream format quickly and efficiently, while keeping storage size at bay. The solution was designed using thousands of man-hours’ worth of very specific experience and expertise in the system-generated document accessibility space, and our industry-leading transformation engine generates accessible output in milliseconds. In fact, the output generated by this solution has been reviewed by the National Federation of the Blind as well as the Department of Education. Using best-of-breed technology and accessibility-specific expertise is the only foolproof way of meeting the tight time frames and deadlines defined by the Department of Education. Learn more about our solution here.

Read More

Replacing Your Legacy Archiving System is a Pain. No More!

Large organizations rely heavily on rapidly evolving technology to thrive in today’s competitive business environment. One of these vital solutions is the electronic archiving system, which is expected to maintain a comprehensive and accurate record of customer information such as statements, bills, invoices, insurance policies, scanned images, and other organizational information that is essential to the survival and growth of the enterprise. It is critically important for modern organizations that these assets are retained in an efficient and intelligent manner so that they can be retrieved on demand for customer presentation, compliance, auditing, reporting, etc. Like all information technology, archive systems need to be upgraded from time to time. Depending on the requirements of a progressive organization, this could even mean replacing the existing system with a brand-new solution. The first step toward an effective solution, however, is identifying the shortcomings of the current system in the context of your evolving business needs. Here are a few tell-tale signs that your archiving system hasn’t been keeping up with your growth:

Waning Vendor Support – The system doesn’t receive enough attention from the vendor in terms of upgrades and support.
Costly Upgrades – It becomes prohibitively expensive to boost performance or add new capabilities and features.
New Media Deficit – The system falls short on receiving and serving up content to the multitude of customer channels, including web, social, mobile, tablet, text messages, email, and print.
Social Disconnect – Perhaps the most easily recognizable symptom of an outdated archive system is the inability to connect with social media such as Facebook and Twitter accounts and capture and store customer information.
Content Inaccessibility – Users complain of an inability to extract data for targeted messaging, trans-promotional marketing, analytics, and other sales and marketing functions.
Compliance Infractions – The inability to store or retrieve content, which could lead to investigations, fines, license revocations, or lawsuits.

If you can relate to one or more of these issues, then upgrading to a more contemporary solution may be the best way forward. An example of the archive migrations we have conducted for our customers, and have extensive experience in, is the Mobius/ASG-ViewDirect® system. The challenges often highlighted for this system include some of those listed above, as well as other issues typically seen in legacy archive systems, such as the lack of a coherent product roadmap, high costs, and an outdated user experience. Customers are often certain about the need for migration but are unsure about how to move to a new archive without disrupting critical business functions. The only real roadblock to improved performance, then, is the migration itself. The process can be laborious and cumbersome, with key performance factors around the ability to perform complex document migrations on time and within budget, while maintaining access for existing applications, repurposing information locked in legacy document formats, and meeting regulatory requirements. While enterprise IT departments have stringent migration requirements, modernizing your archiving system doesn’t necessarily have to be painful, and OpenText®’s ECM Migration service has a methodology in place to make sure it isn’t. The service provides a way to efficiently migrate content out of legacy archiving systems like Mobius/ASG-ViewDirect® and others to a more contemporary solution such as OpenText’s Output Archive (formerly known as BIRT Repository).
Some of the unique benefits of using OpenText’s ECM Migration Service for Mobius migrations include the ability to migrate content out of Mobius without purchasing expensive Mobius APIs, and the capability to read directly from the underlying file structure using the Mobius Resource Extractor, bypassing the need for Mobius to be running. Our ECM Migration methodology is built on best practices gleaned from many successful engagements and utilizes award-winning technologies to automate migration in a lights-out environment without disrupting day-to-day business activities. The ECM Migration team has worked for decades with many ECM systems, including IBM® Content Management OnDemand (CMOD), IBM® FileNet® Image Services, IBM® FileNet® P8, ASG-ViewDirect®, and others, and the maturity of our solution proves it. Our technology and DETAIL™ Methodology enable us to:

- Manage all aspects of a migration
- Cut up to six weeks off of the initial planning
- Use standard logging on a single platform
- Provide multi-threaded support out of the box
- Implement process flows, advanced logic, and routers through drag-and-drop interfaces, without the need for scripting
- Connect to and pool connections with multiple databases and repositories
- Run processes concurrently, by thread or broken down by stage (i.e., Extract, Convert, Load)
- Handle massive volumes of data, documents, images, and metadata

So, if you think it’s time to say goodbye to your current archiving system, know that there are experts out there who can help you define your requirements and deploy an appropriate solution that will take you where you want to go. And remember – organizations that evolve, thrive. Others perish.
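The stage-based, multi-threaded processing described above can be sketched in a few lines. This is not the DETAIL™ Methodology itself, just an illustrative Python sketch in which each stage is a stub and documents are pushed through Extract → Convert → Load by a pool of worker threads:

```python
from concurrent.futures import ThreadPoolExecutor

def extract(doc_id):
    # Pull the raw document and its metadata out of the legacy archive (stubbed).
    return {"id": doc_id, "body": f"raw-{doc_id}"}

def convert(doc):
    # Repurpose the legacy document format into a modern one (stubbed).
    doc["body"] = doc["body"].replace("raw", "pdf")
    return doc

def load(doc):
    # Ingest the converted document into the target repository (stubbed).
    return doc["id"], doc["body"]

def migrate(doc_ids, threads=4):
    # Each worker runs the full Extract -> Convert -> Load chain for one document,
    # so documents migrate concurrently while business access continues elsewhere.
    with ThreadPoolExecutor(max_workers=threads) as pool:
        return dict(pool.map(lambda i: load(convert(extract(i))), doc_ids))

migrated = migrate(range(3))
```

In a real engagement each stub would talk to the source archive, a format converter, and the target repository, and every stage transition would be logged for the defensibility audit trail.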

Read More

Accessible PDF Discussion: Generating Desktop-Level and Enterprise PDFs

To help them achieve complete web accessibility, organizations require a viable technology solution for automatically generating high-volume (i.e., enterprise-level) personalized customer communications in Accessible PDF format. Such a solution would give blind and visually impaired people immediate and equal access to electronic documents, which they currently do not have. This series of blog posts explains why demand is increasing for customer communication documents in Accessible PDF format, describes current industry practices for producing these documents, and introduces a new technology option that enables organizations to keep PDF as their default electronic format for high-volume, web-delivered documents of record while enabling full accessibility and usability of those PDFs for individuals who are blind or visually impaired. In recent blog posts, we examined the Drivers behind Web Accessibility and the Best Practices for PDF and Accessible PDF. In this post, we will look at different approaches to generating PDF and Accessible PDF. How PDFs are Generated PDF documents are either created individually at the desktop level by human operators or in large batches at the enterprise level by sophisticated software applications. Desktop Level At the desktop level, PDF documents are manually created by individuals using word processors such as Microsoft Word, graphic design programs such as Adobe Creative Suite, or other software applications. These low-volume ad hoc documents typically include annual reports, newsletters, marketing collateral, training manuals, program support material, and other public-facing documents. PDF versions of scanned hardcopy documents may also be created at the desktop level.
Enterprise Level At the enterprise level, PDF documents are automatically created in large volumes by powerful software applications (e.g., document composition engines) supported by enterprise-grade IT infrastructure such as relational databases, high-speed servers, and large-capacity storage devices. These high-volume documents are typically customer communications such as notices, statements, bills, and invoices that are personalized with customer account data for individual recipients. Because they contain confidential information, customers usually access such documents through secure, password-protected web portals. Reasons Why PDFs are Inaccessible by Default At the desktop level, PDF documents are typically produced by converting existing native documents (e.g., Microsoft Office documents, Adobe Creative Suite documents) into PDF format. During the conversion process, the software application attempts to formulate a tag structure based on the contents of the native document. If the author of the native document has not followed accessibility guidelines, explicitly identifying elements, such as headings, and properly formatting lists, tables, and other items, the software simply assigns a tag structure based on its best algorithmic guess about the document, often resulting in errors. While software applications that generate Accessible PDF output are able to faithfully reproduce the appearance of a native document, they cannot infer a logical reading order or produce meaningful alternative text for graphical elements. When alternative text for images is missing from the native document, the PDF conversion engine assigns generic identifiers, e.g. “Image 54”, which are meaningless when read aloud by a screen reader program. 
Unfortunately, the automated conversion process itself is error-prone, so even when native documents have been designed with accessibility in mind, Accessible PDFs require post-conversion inspection and adjustment by a knowledgeable technician to make their contents and tag structure 100% compliant with the specified accessibility standard (e.g., WCAG 2.0, Level AA). For example, during conversion, tables may be incorrectly tagged as graphics, or table data may become dissociated from its corresponding column or row header. Similarly, text headings may not be detected, and links without active and meaningful destinations may be recognized only as plain text. In short, desktop PDF conversion technology is far from perfect. Regardless of the quality of the native source document, PDFs created on the desktop must always be manually remediated and inspected (page by page) to ensure compliance with the accessibility standards required by assistive technologies. Traditionally, this is an expensive, labor-intensive, time-consuming process. Traditional Approaches to Generating Accessible PDFs Desktop Level Existing low-volume, ad hoc PDF documents that need to be made accessible are manually remediated by human operators using specialized desktop software applications such as Adobe Acrobat Professional, CommonLook, Abbyy, and others. To lower costs and achieve faster turnaround times, many organizations contract out manual remediation to specialized third-party service providers that operate more efficiently than smaller in-house teams. To facilitate new document creation and improve quality, organizations develop standard accessible templates for individual document types (e.g., Microsoft Office) and enforce their use by employees, vendors, and contractors. Once converted to PDF, every page of a new document is subjected to automated and manual accessibility/usability testing and remediation to ensure that it conforms to accessibility standards such as WCAG 2.0, Level AA.
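The kind of check a remediation pass performs on a tag structure can be illustrated in miniature. This sketch models a PDF structure tree as plain nested dicts rather than using a real PDF library; it flags Figure tags whose alternative text is missing or looks like a converter-assigned placeholder (such as the “Image 54” identifiers mentioned above):

```python
import re

# Placeholder names a conversion engine typically assigns when alt text is absent.
GENERIC_ALT = re.compile(r"^Image \d+$")

def audit_alt_text(node, path="Document", issues=None):
    """Walk a simplified tag tree (dicts with 'tag', 'alt', 'kids') and collect
    the paths of Figure tags whose alternative text needs human remediation."""
    if issues is None:
        issues = []
    here = f"{path}/{node['tag']}"
    if node["tag"] == "Figure":
        alt = node.get("alt", "")
        if not alt or GENERIC_ALT.match(alt):
            issues.append(here)
    for kid in node.get("kids", []):
        audit_alt_text(kid, here, issues)
    return issues

doc = {"tag": "Document", "kids": [
    {"tag": "Figure", "alt": "Image 54"},           # generic -- needs remediation
    {"tag": "Figure", "alt": "Bar chart of fees"},  # meaningful -- passes
    {"tag": "Figure"},                              # missing -- needs remediation
]}
issues = audit_alt_text(doc)  # two of the three figures fail the audit
```

A real remediation tool would, of course, read the actual structure tree from the PDF and also verify reading order, table headers, and heading levels; this is only the shape of the alt-text portion of that work.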
Enterprise Level At the enterprise level, organizations use powerful software applications to dynamically generate high-volume PDF communications for online presentment to customers. However, manual remediation does not scale for enterprise production of Accessible PDFs. The large number of documents created every month—thousands, millions, tens of millions, or more—precludes manual remediation as a viable option, even if only as an accommodation for the percentage of customers requiring the accessible electronic format. Due to sheer volume, manual remediation is cost- and time-prohibitive. Until recently, there was no automated technology solution either. Organizations simply had no way to produce high volumes of personalized customer communications in Accessible PDF format. That has now changed. In the next blog post, we will take a look at an innovative software system that automatically remediates and creates high-volume Accessible PDFs.

Read More

Data Driven Digest for May 1

Next Monday is Star Wars Day, and we’re getting a jumpstart on the holiday with today’s Data Driven Digest. Our first item analyzes the latest teaser trailer for the next installment – yes, there’s data there – and we remain out of this world to admire a video data visualization by an amateur astronomer. We finally plummet back to earth to visualize the value of our personal data. Crash Course: It’s common practice to start with data and create visualizations. When Rhett Allain saw the new teaser trailer for Star Wars: The Force Awakens, he did just the opposite: he observed the land speeder cruising across a desert planet (which, according to news reports, is not Tatooine) and calculated its speed. Published in Wired Science, Allain’s dissection of the scene is a terrific work of video forensics, and includes all of his underlying data. The Force is strong in this one. Location, Location, Location: While we’re out in space, check out the video above by Tony Rice, a software engineer, amateur astronomer, and “solar system ambassador” for NASA. The video visualizes 24 hours of Global Positioning System (GPS) coverage by tracing the paths of all of the GPS satellites and showing the overlap of their signals on the surface of the earth. Rice created the video using JSatTrak and orbital data from NORAD. Rice has also created a similar video showing the “A-Train” – a constellation of Earth-observing satellites that travel in a sun-synchronous orbit. No mind tricks here. Personal Cost: Unlike GPS – which is a single system that works worldwide – the web of privacy and security laws and regulations that cover the globe is a madly mixed bag, and so are people’s valuations of their personal data.
In an article in the May issue of Harvard Business Review, authors Timothy Morey, Theo Forbath and Allison Schoop surveyed consumers in the United States, China, India, Great Britain, and Germany about how they value web search history, location, health history and nine other types of personal data. Their chart of variances from country to country (a snippet of which is above; click through) is part of an article well worth reading. Like what you see? Every Friday we share favorite examples of data visualization and embedded analytics that have come onto our radar in the past week. If you have a favorite or trending example, share it: Submit ideas to blogactuate@actuate.com or add a comment below. Subscribe (at left) and we’ll email you when new entries are posted. Recent Data Driven Digests: April 24: Global cravings, California roadkill, Bay Area costs, tech skills April 17: Rage in the Iliad, comparing country sizes, 25,000 songs April 10: Stick charts, water for food, UFO sightings, dataviz videos

Read More

How will 3D printers help improve banking KYC?


Yes, you read that title correctly. 3D printers are widely known to offer the potential to be a game changer in the physical supply chain across many sectors and industries. The opportunity in Financial Services, however, seems less likely, particularly in any form of real, practical application. Do you agree with that statement or not? Either way, I suggest you read on to find out more. KYC – Why is it a sensitive subject? Today, every Financial Institution runs its KYC processes itself, for many reasons. At the top of the list are the reputational and financial risks that remove any appetite to hand control of KYC to a third party, such as a shared utility or data service. A bank caught by its regulator servicing the wrong client is usually exposed to millions or billions of dollars in fines. As we’ve seen in the news over the last three years, such occurrences fall into the public domain, usually with lasting reputational damage. The approach to KYC today, and the alternatives: the typical KYC process is executed manually, leveraging a combination of paperwork, de-materialisation, and archiving. Overall, it is a costly and lengthy process that happens every time a new client comes on board or a new signatory is allowed into the relationship, and it also needs to be refreshed and verified regularly. KYC processes also delay the “time-to-revenue”: typically the period between the agreed contract and the first day of transaction processing. A number of initiatives have been introduced over the last few years, trying to tackle this challenge from several angles. One common idea that keeps coming back is to enable KYC to be done once and for all, and shared between all Financial Institutions and counterparties: a shared utility that would enable a client or counterparty, as a business or as an individual, to “passport” its KYC identity across all its financial suppliers.
The benefits and advantages seem very compelling: reduced costs of processing KYC, reduced time-to-revenue for the financial suppliers, and less hassle for the clients and counterparties. It’s a win-win for everybody, isn’t it? Why is this nut so tough to crack? Cost reductions and improved client experience look very small when put in perspective against the potential risks and costs of non-compliance. We’re talking about millions or billions of dollars in fines, the risk of losing a banking or insurance license, even shutting down the business entirely. The incremental gains and advantages of digital and shared KYC do not yet appear to offset these risks. Every few months we read about a government fining a Financial Institution for facilitating illegal activities, or a shared utility or international business losing its clients’ personal information and payment details to hackers. Surely this is not a good industry backdrop to encourage digital KYC! What about 3D printers, then – what’s the link? As a consumer, I find that home 3D printers are overpriced gadgets with little practical purpose. As a B2B professional working with the largest Supply Chains in the world, however, the potential opportunity just blows my mind. Analysts agree that most global Supply Chains will be affected, shifting current patterns of commerce and logistics toward a complete transformation over the next few decades. The biggest shift will happen around companies focusing on the production of Intellectual Property, delivered in the shape of Digital Assets – such as the files containing the 3D model and assembly specifications for their products. Other companies will focus on the physical production of commercial items based on those Digital Assets. Analysts agree that most of this world will never be exposed to consumers, just as the world of global logistics is today.
The disruption: Digital Assets and Digital Identity If you download music, movies, or games regularly (legally, of course), then you probably know about Digital Rights Management (DRM). This early-2000s technology enabled content producers and commercial online sharing platforms to ensure you are the only person able to play a track, or to rent a movie for a certain period of time. 3D printers bring a new, far more compelling application for such DRM: the opportunity to control who can print a product, how many copies, for how long, with verified raw materials, and on certified printing equipment. There are typically two facets to this technology: the Digital Asset itself (the 3D design combined with printing requirements and authorised users) and the Digital Identity (the certified, authenticated businesses and users). You see where this is going now… Digital Identity management will spread fast and wide, riding the 3D printing revolution in both B2B and consumer markets. Digital Asset owners and producers will have an enormous stake, and KYC shared utilities will probably continue to experiment and grow over the next couple of years, with more and more use cases coming into the frame. I don’t believe that shared utilities for a single industry will gather enough critical mass. Payments and Cash Management is already changing, with the introduction of PSD2 rules in Europe and the rise of Blockchain technology and distributed payment ledgers. More broadly, banking users (businesses or consumers) are also beginning to require a unique Digital Identity for other aspects of their lives. Combining innovation with regulation over the next five years is going to be key, and the winner will likely be whoever manages to combine Digital Identity across several industries and markets, much as the IT Certificate Authorities (CAs) have spread across all industries since the early 2000s.
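To make the print-control idea concrete, here is a hedged sketch of how a license might bind an asset, a certified printer, and a print quota together. The field names and the HMAC signing scheme are assumptions for illustration only, not a description of any real 3D-printing DRM system:

```python
import hashlib
import hmac
import json

SECRET = b"issuer-signing-key"  # hypothetical key held by the asset owner

def sign_license(license_fields):
    """Issuer side: derive a signature over the asset, printer, and quota,
    so none of them can be altered without detection."""
    payload = json.dumps(license_fields, sort_keys=True).encode()
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def may_print(license_fields, signature, prints_used):
    """Printer side: verify the license is authentic and quota remains."""
    payload = json.dumps(license_fields, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False  # tampered or forged license
    return prints_used < license_fields["max_prints"]

lic = {"asset": "bracket-v2.stl", "printer": "certified-001", "max_prints": 3}
sig = sign_license(lic)
```

A production system would use asymmetric signatures and certified identities (the Digital Identity facet) rather than a shared secret, but the core transaction — an authenticated party presenting a verifiable claim about what it may print — is the same.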

Read More

Wearables, Big Data, and Analytics in Healthcare

As wearable technology – including smartwatches, fitness trackers, and even clothing and shoes with integrated sensors – moves into the mainstream, healthcare organizations are exploring ways to use these devices to simplify, transform, and accelerate patient-centric care. Their goals include boosting people’s health, improving patient outcomes, streamlining manual processes, and opening new avenues for medical research and epidemiology. Analytics, data visualization, and reporting are central to those efforts. Transforming Data into Insight and Action Wearables today can monitor and gather wearers’ activity level, heart rate, and other vital signs; reward wearers for healthy activities and habits; and alert the wearer and others, such as doctors, emergency responders, and family members, when problems arise. “Wearable health technology brings three distinctly beneficial trends to the table – connected information, community, and gamification,” writes Vala Afshar on the Huffington Post. “By harnessing this trifecta, healthcare leaders have new ways to build engagement and create accurate, far-reaching views of both personal and population health.” Wearables are both producers of data (collecting and transmitting wearers’ data) and consumers of data, receiving and displaying information about the wearer’s well-being and progress. Wearables are textbook generators of big data, with high velocity, volume, and variety. And as in any big data scenario, transforming that data into insight and action requires a powerful, scalable analytics, data visualization, and reporting platform. Wearables in healthcare share many characteristics with the networks of sensors in Internet of Things (IoT) applications. But healthcare adds complexities and wrinkles, particularly regarding security. With IoT, everyone agrees that security is important, but the rules and standards vary and are subject to debate.
However, when individuals’ personal health data is in the mix, more (and more complicated) laws, security regulations, and privacy concerns kick in. “A person’s health information is particularly sensitive,” writes Victoria Horderen in the Chronicle of Data Protection, “[b]oth in a legal sense (because health information is categorized as sensitive under EU data protection law) but also in an obviously everyday sense – people feel that their health information (in most but not all circumstances) is private.” Horderen writes specifically about the EU Data Protection Regulation, but the points she makes apply globally. The takeaway, I think, is that a platform supporting a wearable initiative in healthcare requires a robust, proven security foundation. Many Use Cases With a flexible big data platform supporting wearables, many healthcare use cases arise. Most of these are possible with today’s technology, while others could be on the horizon using future generations of devices. Some use cases include:

- A person under observation for heart disease can use a wearable to monitor his or her heart rate 24/7, not just while at the doctor’s office. The wearable enables collection of both historical and point-in-time data, and the platform enables in-depth analysis of that data.
- Alerts presented on a smartwatch can provide customized encouragement for good behavior (such as walking or stair climbing) and positive lifestyle choices (such as getting enough sleep). Such uses are ripe for gamification; if the wearer walks a certain number of steps (customized for the individual), rewards are unlocked. People are more likely to embrace a wearable if it provides an element of fun and positive feedback.
- Data from large numbers of wearers can be anonymized and aggregated to perform epidemiological studies. Data can be segmented by geography, activity level, and demographics if wearers choose to opt in.
- A wearable paired with a GPS-enabled smartphone can transmit coordinates and pertinent data to first responders in case of an emergency and alert family members of the wearer’s status.
- A surgeon wearing smart glasses can monitor patient vital signs and other medical equipment in real time during an operation without turning away from the patient.

Think Small, Think Big As these use cases indicate, a platform for wearables in healthcare needs to operate on a micro level, sending customized, personalized alerts, recommendations, and actions to individuals based on their own data. But a platform should also enable macro-level analysis of vast quantities of data to spot trends and identify correlations within large populations. The ability to analyze data on a large scale not only holds promise for medical research, it also improves the wearable’s value to the individual user: an intelligent platform with access to both individual and aggregate data can, for example, tell the difference between a heart rate spike due to exercise – a good thing to be encouraged – and a cardiac episode requiring attention and intervention, judged on a case-by-case basis rather than against a pre-set threshold. One last bit of good news for healthcare providers who want to embrace wearables: doctors are more trusted than any other group with consumers’ personal data. According to research by Timothy Morey, Theo Forbath, and Allison Schoop published in the May 2015 issue of Harvard Business Review, 87 percent of consumers find primary care doctors “trustworthy” or “completely trustworthy” with their personal data. That percentage is greater than for credit card companies (85 percent), e-commerce firms (80 percent), and consumer electronics firms (77 percent), and much higher than for social media firms (56 percent). As wearable use grows, that healthy goodwill is worth building on. Smartwatch image by Robert Scoble
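The exercise-versus-episode distinction can be caricatured in a few lines: the same heart rate means different things depending on activity context. The thresholds below are invented for illustration; a real platform would learn them per individual from historical and aggregate data:

```python
def classify_spike(heart_rate, resting_rate, steps_per_min):
    """Toy rule: an elevated heart rate during vigorous movement is likely
    exercise; the same rate while sedentary warrants an alert.
    All thresholds here are illustrative, not clinical."""
    if heart_rate < resting_rate * 1.5:
        return "normal"      # no spike relative to this wearer's baseline
    if steps_per_min > 60:
        return "exercise"    # spike explained by activity -- encourage it
    return "alert"           # spike with no activity context -- escalate
```

For example, a reading of 150 bpm against a 60 bpm resting rate classifies as exercise when the wearer is walking briskly, but as an alert when the step count is near zero.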

Read More

To Keep or Not to Keep: A Records Management Question

With 90% of the world’s data generated over the last two years and enterprise information growing at an exponential rate, the ability to effectively manage and govern the lifecycle of important electronic business information is more important than ever. Recently, OpenText CMO Adam Howatson sat down with Tracy Caughell, Director of Product Management, to discuss worldwide records management regulations, the consequences of non-compliance, and the cost benefit to organizations that embrace records management solutions. According to Caughell, one of the benefits of OpenText Records Management is the ability to identify when records can be disposed of. “We all know that we need to get rid of stuff,” she says. “Our strength is allowing [customers] to do it in a way that is defensible, a way that they can prove they did what they were supposed to do, when they were supposed to do it, with minimal disruption to their daily activities.” Watch the video below to learn more, or click here. From capturing, to classifying, to managing information, an enterprise-wide records management strategy can help organizations comply with laws, external regulations, and internal policies. By providing a structured and transparent way to maintain records from creation through to eventual disposition, Records Management can help enhance corporate accountability, ensure regulatory compliance, and make it easier to find the information you need. More information: read how Sprint uses OpenText Records Management to take control of its records and documents, and learn more about the OpenText Records Management solution. Photo courtesy of Marcin Wichary
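The defensible-disposition decision Caughell describes ultimately reduces to a rule like "the retention period has elapsed and no hold applies." The following is a minimal sketch of that check with illustrative parameters; it is not OpenText's actual retention model, and it deliberately ignores complications such as event-based retention triggers and leap-day edge cases:

```python
from datetime import date

def disposition_due(record_closed, retention_years, hold=False, today=None):
    """Return True when a record is eligible for defensible disposal:
    its retention period (in whole years from the date it was closed)
    has elapsed and no legal hold applies. Illustrative only."""
    today = today or date.today()
    # Simplified year arithmetic; real schedules handle month/day edge cases.
    due = record_closed.replace(year=record_closed.year + retention_years)
    return (not hold) and today >= due

# A record closed 31 March 2008 with a 7-year schedule is disposable in April 2015,
# unless a legal hold suspends the schedule.
disposition_due(date(2008, 3, 31), 7, today=date(2015, 4, 1))
```

The value of a records management system is precisely that it evaluates this rule consistently across millions of records and logs the outcome, so the organization can prove it did what it was supposed to do, when it was supposed to do it.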

Read More

Information Security in the Digital Age [Podcast]

This is the first of what we hope will be many podcasts in which we explore the technology and culture of Enterprise Information Management (EIM). We’re going to share stories about how OpenText is delivering world-class technology and improving our Customer Experience on a daily basis. In this installment, we hope to give you a better understanding of the current cyber security climate, show you what we’re doing to keep your data secure and protect your privacy, and tell you how you can protect yourself online. Our discussion on information security has been recorded as a podcast! If you’d like to listen but don’t see the player above, click here. If you don’t want to listen to the podcast, we’ve transcribed it for you below: … The unknown unknown… … If it was three in the morning and there was a bunch of guys standing down a poorly lit alley, would you walk down there by yourself? Probably not. Yet on the Internet, we do that continuously—we walk down that street—and then we’re shocked when negative things happen… … People have an expectation that once they put a lock on their door they’re secure. And that might be the case in their home. But electronically it’s not quite so simple… Are we safe online? Perhaps a better question is whether our information is safe online. 2014 was a banner year for information security (data security, or what we now call cyber security), and if analyst reports are any indication, security professionals are on high alert in 2015. International governing bodies have also placed an urgency on better understanding cyber security risks and putting in place strategies to ensure stable telecommunications and safeguard information. There has also been growing concern around data privacy. Though security and privacy work hand-in-hand and it’s difficult to have data privacy without security, there is a difference between the two terms. Security involves the confidentiality, availability, and integrity of data.
It’s about only collecting information that’s required, then keeping that information safe and destroying it when it’s no longer needed. Privacy, on the other hand, is about the appropriate use of data. To help us through the topic of cyber security, we talked to Greg Murray, VP of Information Security and Chief Information Security Officer at OpenText. The OpenText security team is made up of specialists around the world who provide operational response, risk assessments, and compliance. They also brief executive leadership regularly, and keep development teams abreast of pertinent security information. More importantly, Greg and his team work with our customers to ensure their unique security needs are covered end-to-end. “It starts early in the process,” says Greg. “It starts in the presales cycle, where we try to understand the risks that [our customers] are trying to manage in their organization. We find out how they are applying security against that, and then that becomes a contractual obligation that we make sure is clearly stated in our agreement with the customer. From there, it goes into our operations center—or risk center, depending on what we’re looking at—and we ensure that whatever our obligations are, we’re on top of them and following the different verticals and industries.” Again, 2014 was a big year for cyber security in the news (I think we all remember the stories of not too long ago). But while news agencies focused on the scope and possible future threats, Greg learned something else: “I think if we look at media, one probably would not have argued until last year that media was a high threat area compared to something like aerospace defense. That has changed. Clearly that has changed. As a result, customers come back and say, ‘Hey, our environment has changed. What can you do to help us with that?’” “What a financial institution requires is very different from what a manufacturing provider or a pharmaceutical organization requires.
Some of that, as a provider to these organizations and customers, we can carry for them on their behalf. In other cases they must carry it themselves. A lot of the discussions that we have with customers are in regards to ‘Where’s that line?’” “At the end of the day, it’s a collaboration. It’s not all on the customer, and it’s not all on OpenText. We have to work together to be able to prove compliance and prove security across the environment.” Regardless of the size, industry, or location of an organization, security needs to be a top priority. This concept isn’t a new one. As Greg told Adam Howatson, OpenText CMO, in a recent Tech Talk interview, information security hasn’t evolved that much over the last 50 years (view the discussion on YouTube). Greg’s answer may surprise you, but after some digging I learned that back in 1998, the Russian Federation brought the issue of information security to the UN’s attention by suggesting that telecommunications were beginning to be used for purposes “inconsistent with the objectives of maintaining international stability and security.” Since then, the UN has been trying to increase transparency, predictability, and cooperation among the nations of the world in an effort to police the Internet and private networks. Additionally, if you have seen the Alan Turing biopic The Imitation Game, you know that people have been trying to encrypt and decipher messages since the 1940s and probably even earlier. Today, the lack of physical borders online has certainly complicated things, but the information security game remains the same, and cooperation among allies remains the key. “Are we all contributing together?” Greg asks. “If we’re all working together—just like Neighborhood Watch—we need that same neighborhood community watch on the Internet. If you see stuff that doesn’t look right, you should probably report it.” The bad guys are organized, and we need to be organized as well.
“The more we share information and the more we work together… Particularly at OpenText, we have a lot of customer outreach programs and security work where we work hand-in-hand with customer security teams. By doing that, we improve not only our security, but we improve security across the industry.” Recently I attended a talk given by Dr. Ann Cavoukian, former Ontario Privacy Commissioner and Executive Director of the Privacy and Big Data Institute at Ryerson University in Toronto. In it, she said that “privacy cannot be assured solely by compliance with regulatory frameworks; rather, privacy assurance must ideally become an organization’s default mode of operation.” She said that privacy—which, again, involves the appropriate use of information—must be at the core of IT systems and accountable business practices, and embedded in physical design and networked infrastructure. Privacy needs to be built into the very design of a business. And I think it’s evident from what Greg says about security, and the way OpenText designs its software with users’ needs in mind, that our customers’ privacy and security are an essential part of what we offer. “We have a tremendous number of technical controls in place throughout all of our systems. For us, though, it starts on the drawing board. That’s when we start thinking about security.” “As soon as Product Management comes up with a new idea, we sit down with them to understand what they’re trying to achieve for the customer and how we’re going to secure it. So that by the time somebody’s uploading that document, it’s already gone through design, engineering, regression testing analysis, and security penetration testing.” “One of the other things we do is called threat modelling. Typically we look at the different types of solutions—whether they’re file transfer or transactional, for example—and we look across the industry to see who has been breached and how.
We then specifically include that in all of our security and regression testing.”

You don’t need to look further than the OpenText Cloud Bill of Rights for proof of our dedication to information security and privacy. In it, we guarantee our cloud customers the following:

You own your content
We will not lose your data
We will not spy on your data
We will not sell your data
We will not withhold your data
You locate your data where you want it

Not everyone is up front with their data privacy policy, but with people becoming more aware of information security and privacy concerns, organizations are going to find themselves facing serious consequences if they do not make the appropriate changes to internal processes and policy.

Data security doesn’t lie solely in the hands of cloud vendors or software developers, however. We asked Greg what users and IT administrators can do to protect themselves, and he said it comes down to three things:

“One is change your passwords regularly. I know it sounds kind of foolish, but in this day and age, if you can use two-factor or multi-factor authentication, that does make a big difference.

“The second thing you can do is make sure your systems are patched. 95% of breaches happen because systems aren’t patched. When people ask ‘What’s the sexy side of security?’, it’s not patching. But it works. And it’s not that expensive—it’s typically included free from most vendors.

“The third thing is ‘think before you click.’ If you don’t know who it is or you don’t know what it is… Curiosity killed the cat, and curiosity infects computers.”

We hope you enjoyed our discussion on information privacy and cyber security. If you’d like to know more about the topics discussed today, visit opencanada.org, privacybydesign.com, and of course opentext.com. We also encourage you to learn more about security regulations and compliance by visiting the CCIRC and FS-ISAC websites.
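Greg’s first tip touches on two-factor authentication. As a brief aside, the one-time codes generated by most authenticator apps follow a published standard, TOTP (RFC 6238): the server and the app share a secret, and each side independently derives a short code from that secret plus the current time. A minimal Python sketch of the derivation (the secret below is the RFC 6238 test value, not a real credential):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Derive a time-based one-time password (RFC 6238, HMAC-SHA-1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    value = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

# RFC 6238 test vector: this secret at t=59s yields the 8-digit code 94287082.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59, digits=8))
```

Because both sides can compute the same code but it changes every 30 seconds, a stolen password alone is no longer enough to log in.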

Read More

Business Process: The Future of ECM

For years, enterprise content management (ECM) solutions were adopted primarily for two main use cases. The first was to achieve compliance, and many early adopters of ECM continue to successfully use it to address various regulatory requirements. Compliance provided functionality for records management, archiving, and information governance. A while back I wrote a blog post titled What Features Ensure Compliance? that elaborates on the functionality required for compliance use cases. The second use case was team effectiveness, with functionality such as collaboration, document sharing, and social capabilities. Collaboration is subject to frequent changes in direction, as every new technology promises an easier and more compelling user experience—from mobility and social software to file sync-and-share. The frequent feature churn in the collaborative use cases doesn’t sit well with compliance requirements, which often need the system to remain unchanged for several years (validated environments, anyone?).

ROI and Dependency on the User

Not only were the two primary use cases not well aligned in their feature requirements, they shared two additional challenges. First, neither use case provides a very strong ROI. Sure, we marketers always calculate the savings in storage and the government fines that compliance solutions help you avoid. But let’s face it: preventing penalties is not exactly a hard ROI, and storage is cheap (or at least everybody thinks it is). The collaborative use cases are even worse—measuring ROI here is fuzzy at best and often impossible. The second challenge was the dependency on users to do the right thing. For the compliance use cases, users were expected to diligently file their documents, weed out their inboxes, type in the metadata, and apply the right retention policies. Obviously, users are not very consistent at this, even if you try to force them.
In the case of collaboration, users were expected to share their documents openly with others, comment in a productive way, and stay away from email and all the other collaboration tools around them. As it turns out, this type of behavior depends very much on the culture of the team—it works for some, but it will never work for others. The adoption of any collaboration solution is therefore usually very tribal. So, is there any hope for ECM? Can we get an ROI and get employees to use it without someone watching over their shoulder?

ECM: Part of the Process

As it turns out, there is a third type of use case emerging: the use of ECM as part of a business process. Business processes are something people already do—we don’t have to force anyone. That’s what companies, and working in them, are all about: everything we do is part of a business process. Business processes are also important, relevant, and very measurable. There is an ROI behind every business process. Every instance of a business process includes context, which can be used to populate the metadata and to select the right policy automatically. Business processes can handle the automation of content management and don’t have to rely on the end user to do it. But business processes don’t live in ECM. Sure, the process artifacts usually reside in a content repository, but it would be a stretch to claim that the entire business process happens in an ECM application. Nor does it live in the BPM application, even if that application may be the primary application for some users. In fact, there is usually a master application from the structured-data world that rules the business process: enterprise resource planning (ERP), customer relationship management (CRM), product lifecycle management (PLM), supply chain management (SCM), etc. That’s why it is important for ECM to connect with the master applications through the business process.
This is not just a simple way to link data sets or to hand over data from one system to another. Using modern, REST-based technology, it is possible to achieve integration that goes much deeper and involves users, roles, permissions, classifications, and of course the user experience.

Deal with Content Chaos

ECM addresses some very important problems that every organization has to deal with. Given the volume and relentless growth of content in every enterprise, that content has to be managed. Yet ECM has struggled to be adopted widely because of a lack of tangible ROI and the difficulty of attracting end users. Tying ECM to a business process through a master application addresses both challenges. It may not solve every problem with content in the enterprise, and there will still be content outside of any business process, but it will go a long way toward dealing with what AIIM calls “Content Chaos”. Click below to view my SlideShare presentation from the AIIM Conference 2015 on the challenges with traditional approaches to ECM and the solution provided by tying ECM to business processes: Business Process – the Future of ECM from Lubor Ptacek
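To give a feel for what this kind of process-driven filing can look like, here is a small sketch in Python. Every name in it (the fields, object types, and classification scheme) is hypothetical and illustrative, not an actual OpenText or ERP API. The point is that the business-process context, not the end user, supplies the metadata and the retention classification:

```python
import json

def attachment_payload(process, doc_id):
    """Build a REST payload that files an ECM document against a
    business-process instance; the process context supplies the
    metadata, so the user never types it in."""
    return {
        "documentId": doc_id,
        "metadata": {
            "businessObject": process["type"],   # e.g. an ERP purchase order
            "objectNumber": process["number"],
            "customer": process.get("customer"),
        },
        # The classification, derived from the process, selects the
        # retention policy automatically.
        "classification": "{}/{}".format(process["type"], process["region"]),
    }

po = {"type": "PurchaseOrder", "number": "PO-4711",
      "customer": "ACME", "region": "EU"}
body = json.dumps(attachment_payload(po, "doc-123"))  # POST this to the ECM endpoint
print(body)
```

Because the metadata and classification are computed from the process instance, every filed document is consistent, which is exactly what manual filing never achieved.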

Read More

Digital Disruption: The Forces of Data Driven Smart Apps [Part 4]

Editor’s note: Shaku Atre (at right, above, at our Data Driven Summit last December) is the founder and managing partner of the Atre Group, Inc. of New York City, NY and Santa Cruz, California. (Read more about Atre here.) Atre has written a thorough and compelling treatise on the disruptive power of mobile apps, and supported her analysis and conclusions with templates and case studies. We are privileged to present her analysis here in four parts. In Part 1, she made the case for mobile apps and laid out some of the forces behind digital disruption. In Part 2, Atre described two more disruptive forces, and in Part 3 she shared two templates for creating mobile app case studies. Today, the series concludes with mobile app case studies in the financial services, telecommunications, car rental, and pharmaceutical industries. At the end of this post we share a link to download the entire series.

—–

Digital Disruption: The Forces of Data Driven Smart Apps [Part 4 of 4]
Copyright by Atre Group, Inc.

Case Study 1: Financial Services

Let us consider a basic consumer and small business bank as a publisher of a smart app for “Business Advantage Checking with Mobile Banking”.

Figure 3

Who are the primary beneficiaries?
a. The bank’s customers

Who are the secondary beneficiaries?
a. The bank’s customer base grows with very good referrals; the bank itself will be the beneficiary.
b. Banks make money by using customers’ money, loaning it out at higher rates than what the banks pay their customers.
c. If a bank has a good amount of money in reserves, the bank can get a better interest rate from the Federal Reserve Bank.
d. Telecommunications companies that supply Internet capabilities to the banks
e. Credit score companies, which keep track of the creditworthiness of individuals
f. Government, which can put a hold on accounts if taxes are not paid
g. Mortgage banks; credit card companies; employees; city, state, and federal governments with electronic funds transfers from the checking account – any recipients of the EFT

Examples of functionalities to be provided by the bank’s app:

• The main intent of this app is to have your own bank in your back pocket.
• Which devices are used for online banking? The most frequently used devices are as follows (and there will be many more that we can’t even think of):
a. iPhone (iOS)
b. iPad (iOS)
c. Android Phone
d. Windows Phone
e. Blackberry
f. Android Tablet
g. Kindle Fire
h. Other Devices
• The main functions expected of a successful bank app are: login, account authentication, account overview on one screen (easy creation of your own banking dashboard), deposit, bill pay, transfers, person-to-person payment, message center, customization, and PFM (Personal Finance Management) with possible integration with accounting software.
• Your own banking dashboard, addressing the needs of different customer segments: total control for the end user, a customized dashboard with drag-and-drop widgets, a widget catalog, and stored personal likes and dislikes to make the experience desirable.
• Accounts and Transactions:
a. Monitoring balances of all types of accounts at the bank: checking, savings, debit and credit cards, mortgage loans, personal loans
b. Transaction details with various filtering options, and tools to help categorize and tag
c. Tracking of completed as well as pending transactions; ATM withdrawals and deposits; check deposits, cash deposits, and online deposits; warnings about when deposited money will be available to withdraw; grouping of accounts; joint accounts and setting limits on how much each person can withdraw at once and within what timeframe; adding accounts from other external banks
• Deposits & Loans:
a. Keep track of details of deposits and loans
b. Verify, on an ongoing basis, credit card limits, credit card payments each month, loans and overdrafts, and interest rates
c. Prioritize repayment schedules, paying off the loans with the highest interest rates first, etc.
d. Simplification of the new loan application process
• Money Transfers & Person-to-Person Payments:
a. Money transfers of multiple types, such as Person-to-Person (P2P) Transfers as well as Account-to-Account (A2A) Transfers
b. Domestic Transfers and International Transfers
c. Scheduled and recurring payments are supported
d. Connection with social address books so that friends can transfer money using email addresses or mobile phones
e. Transfers that are pending or scheduled are watched
• Bill Pay:
a. Vendors should be able to email bills to a secure message center
b. Optical Character Recognition and mobile scanning capabilities for paper invoices from previously verified vendors for quick payments
• Split & Share:
a. Receiving invoices together and splitting them by automatic debits of accounts among already declared friends
b. Social address books to be integrated with the banking transactions
• Alerts:
a. If the balance in the account goes down to a certain amount, an alert message is sent to the account holder’s smartphone
b. An alert about a bounced check and the charge taken out of the checking account
• Which services would you like to search for?
a. ATMs
b. Banking centers
c. 24-Hour ATMs
d. Banking centers open Saturdays
e. Drive-Up ATMs near my current location

Which parts of Big Data can be stored and used?
Figure 4

• Customers’ Data Storage (primary beneficiaries): direct deposits, direct debits, EFTs, account-to-account and person-to-person transfers, balance transfers, account management with QuickBooks integration
• Customers’ (Primary & Secondary) Transactional Data: customers’ invoices & archived internal data
• Potential Customers’ Data Storage: data such as referrals received, credit scores, accounting integration
• External Data Storage: marketing and industry big data such as FIX, SWIFT, and competitive data; data such as bank reserves, troubled banks, and prime rate changes are streamed

Case Study 2: Telecommunications

Let us consider a mobile telecommunications service provider as a publisher of a smart app for a Smart Telephone Service Provider.

Figure 5

Who are the primary beneficiaries?
• Consumers, small and large business owners

Who are the secondary beneficiaries?
• Telephone manufacturers
• Independent telephone service providers
• Insurance companies insuring hardware
• All types of industries that have mushroomed with mobile equipment and services

Examples of functionalities to be provided by the telecom’s app:
• View your usage
• Purchased Extras
• Manage your plan & Extras
• View recent transactions
• Top Up – Credit Card
• Top Up – Prepaid Voucher
• Pay your bill
• View Activity – For Prepaid
• Alerts & Notifications – For Prepaid

Novel apps for telecom:
• Public health, e.g. an Ebola outbreak: connect, via toll-free numbers, the human resources needed to help save lives at a massive scale. Telecom and the Internet are the two most important ingredients.
• Interactive voice response currency converter app: providing up-to-date exchange rate information
• Finding missing children

Which parts of Big Data can be stored and used?
Figure 6

• Customers’ Data Storage: customer usage, recent transactions, voice usage, data usage, bill payment, top-up credit card
• Customers’ Transactional Data: customers’ invoices & archived internal data
• Potential Customers’ Data Storage: telephone manufacturers’ special discounts, insurance companies’ special offers for lost telephones, special deals
• External Data Storage: telephone sales statistics, regulatory commissions’ data, marketing campaigns by various telephone service companies, FTC rules and regulations

Case Study 3: Car Rental

Let us consider a car rental company as a publisher of a smart app for an “Automobile Traffic Management App – ATMA”.

Figure 7

Primary beneficiaries of the app: drivers, accompanying passengers

Secondary beneficiaries of the app, and their benefits:
1. Police Department
• Information on the severity of accidents, and any actions necessary based on that information
2. Hospitals
• Which types of ambulances should be sent, and how big should they be?
• Which specialty of physicians should be ready to help the injured people?
• Which equipment should be kept ready?
3. Property & Casualty Insurance Companies
• Which roads are hazardous and cause property damage?
• Which drivers are “high risk”?

Examples of functionalities to be provided by ATMA:
• Traffic Maps to show the travel route taken by the driver (a visualization)
• Alerts (deployment of mobile technology – a speaking app; the driver can set the timer for how often the app should speak, e.g. every five minutes or every ten minutes)
• Traffic Congestion:
i. Maps – a visualization that integrates a map with easy-to-understand visual icons such as bumper-to-bumper traffic, ambulances, etc.
ii. What type of interaction could be provided to avoid hazardous situations
iii. The possible reasons for the backup:
1. Construction
2. Accident
3. A specific event
4. Inclement weather
iv. How long the backup is in terms of time
v. What the average speed is
vi. How long the expected delay to reach the destination is
vii. Whether someone should be informed about the delay (this information could be set up before starting the journey; if the delay is longer than fifteen minutes, the consumer should be informed with a text message)

Which parts of Big Data can be stored and used?

Figure 8

• Customers’ Data Storage: driver information such as driving records, DMV records, car information, starting and ending locations, etc.
• Customers’ Transactional Data: customers’ invoices & archived internal data
• Potential Customers’ Data: police reports of various accidents, hospitals’ reports, insurance claims, state & county road renewal plans, new construction plans
• External Data Storage: State Highway Patrol data, road sensor data, maps, construction data updates, previous accident data in each part of the traffic area

Which parts of Big Data could be used?
i. State Highway Patrol data
ii. Road sensors for accurate readings
iii. Maps, which should be zoomable and clickable and should provide accurate speeds for each exit along the highway
iv. Drivers should be able to report any incidents on the roads by “speaking” in the car, keeping both hands on the steering wheel; that “voice data” could be a part of the “Big Data” for traffic information

What are novel ways for decision making by drivers?
v. Getting alerts to save time by driving routes with less traffic
vi. Avoiding hazardous situations
vii. Recording the problem areas in the database stored in the automobile’s memory, and evaluating the database records before starting any trip longer than an hour
viii. The app informing the parties at the destination so that they know the driver is delayed because of such and such
ix. If a restaurant lunch or dinner is set at a certain time, requesting the scheduled time of the reservation and the estimated time of delay, plus another 15 minutes
x. Police reports of various accidents, hospitals’ reports, insurance claims, state & county road renewal plans, new construction plans

Case Study 4: Pharma

Let us consider a pharmaceutical company as a publisher of an app, with people with ailments as primary customers, and pharmacies, physicians, hospitals, clinics, medical insurance companies, Medicare, and Medicaid as secondary customers.

Figure 9

Examples of functionalities provided by pharma apps:
• Diary-based apps: assisting patients with the day, the time, and the dose taken or to be taken; a medication passport (AstraZeneca) with the names, doses, and timings of the drugs
• Glucose monitoring apps for patients afflicted by diabetes
• Helping patients to track test results and appointments (Eli Lilly – MyNet Manager)
• Contraceptive reminder: My iPill by Bayer
• Procedures: showing how to administer certain procedures, e.g. self-administered insulin injections for diabetes patients (Eli Lilly)
• Educational: foods that reduce the risk of diabetes on one side, and the ones that exacerbate it on the other (Boehringer Ingelheim’s Complications Combat)
• Alerts: sending alerts to family members when someone doesn’t take their medication, and producing charts showing adherence to treatment regimens (Johnson & Johnson’s subsidiary Janssen – about half of patients miss medications)
• Weight Loss: keeping track of weight and food intake (Noom Weight Loss Coach)

Which parts of Big Data can be stored and used?
Figure 10

• Primary Customers’ (Patients’) Data Storage: patients report which drugs they take, related improvement in ailments, undesired reactions experienced by the patients, severity of the reactions
• Customers’ (Primary & Secondary) Transactional Data: customers’ invoices & archived internal data
• Potential Customers’ Data Storage: pharmacies record sales of drugs, from the most frequently sold to the least frequently sold, and which physicians recommend which drugs
• External Data Storage: data from the National Health Service, the Global Registry of Acute Coronary Events (GRACE), and the Centers for Disease Control and Prevention (CDC)

Prepare and act to handle the digital disruption that is rumbling around the corner.

—–

These four blog posts by Shaku Atre are available as PDF downloads here: Parts 1 and 2. Parts 3 and 4.

References for Parts 3 and 4:

Mobile Application Development: http://en.wikipedia.org/wiki/Mobile_application_development

Telecom: http://help.spark.co.nz/app/answers/detail/a_id/33187/~/smartphone-app, http://tadsummit.com/2013/, http://blog.tadsummit.com/

Pharma: http://www.fiercebiotechit.com/special-reports/20-big-pharma-and-biotech-mobile-apps-2013?page=0,0

Some of Iodine’s competitors: http://www.webmd.com/drugs/index-drugs.aspx, http://www.drugs.com/drug_information.html, http://www.mayoclinic.org/drugs-supplements

One difference between Iodine and its competitors is Iodine’s data-driven approach vs. its competitors’ content-driven approach.
Here is the write-up about Iodine in The New York Times, dated Wednesday, September 24, 2014: http://www.nytimes.com/2014/09/24/technology/to-gather-drug-information-a-health-start-up-turns-to-consumers.html?module=Search&mabReward=relbias%3Ar%2C%7B%221%22%3A%22RI%3A6%22%7D

Embedded Analytics: http://www.slideshare.net/JessicaSprinkel/the-complete-guide-to-embedded-analytics

New York City Medallion: http://www.nytimes.com/2014/11/28/upshot/under-pressure-from-uber-taxi-medallion-prices-are-plummeting.html?module=Search&mabReward=relbias%3Ar%2C%7B%222%22%3A%22RI%3A17%22%7D&_r=0&abt=0002&abg=1

Read More

Health Care Organizations’ Email Security Isn’t Making the Grade

So, it looks like a lot of health care organizations are flunking email security. According to the “state of email trust” survey cited in a recent Fortune Magazine article, health care organizations “severely lag” when it comes to securing email communications. In fact, the article states that an email “purportedly sent from a typical health insurance company is, for instance, four times likelier to be fraudulent than an email that claims to be from a social media company.” A spokesperson from the surveying organization went on to state that “The poor folks in health care have traditionally not had much digital interaction. They’re the ones furthest behind by a country mile.” Considering the strict security compliance regulations in the space, this is disconcerting for the health care industry.

The article went on to explain that only one of the 13 health care companies surveyed surpassed the ‘vulnerable’ category when it came to implementing three standard secure email protocols:

Sender Policy Framework, or SPF, which checks emails against a list of authorized senders
DomainKeys Identified Mail, or DKIM, which verifies the authenticity of a sender through cryptographic digital signatures
Domain-based Message Authentication, Reporting, and Conformance, or DMARC, which checks emails against a published record on a company’s servers, notifies the company of any potentially spoofed emails, and rejects suspicious emails as spam

Fortunately, solutions like OpenText Secure Mail support these security protocols while tracking, encrypting, and controlling the distribution of your secure email messages. One key feature of Secure Mail that might help some of these “vulnerable” health care companies is Data Leak Prevention (DLP). This capability limits access to, and transmission of, sensitive information based on specific security policies.
Features like this position Secure Mail as a strategic business tool for organizations looking to maintain the confidentiality of protected health care information.
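To make the third of those protocols a little more concrete: a DMARC policy is simply a DNS TXT record that a company publishes at `_dmarc.<its domain>`, and receiving mail servers parse it to decide what to do with messages that fail the SPF and DKIM checks. A small illustrative parser in Python (the record and domain below are hypothetical examples, not any real company’s policy):

```python
def parse_dmarc(record):
    """Split a published DMARC TXT record into its tag=value pairs."""
    tags = dict(
        part.strip().split("=", 1)
        for part in record.strip().rstrip(";").split(";")
    )
    if tags.get("v") != "DMARC1":
        raise ValueError("not a DMARC record")
    return tags

# A hypothetical record published at _dmarc.example-insurer.com:
rec = "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example-insurer.com; pct=100"
tags = parse_dmarc(rec)
print(tags["p"])  # prints "reject"
```

The `p` tag is the policy itself: `p=none` merely monitors and reports, `p=quarantine` sends failing messages to spam, and `p=reject` refuses them outright, which is what stops a spoofed “health insurer” email from ever reaching the patient.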

Read More

Healthcare Data Breach Hits Top Insurer

It looks like the report was accurate. I recently blogged about a Healthcare Informatics article entitled “Report: Healthcare Data Breaches Expected to Increase in 2015”. The article discussed a report stating that personal health information’s (PHI) continued shift toward digital formats will heighten exposure to data breaches. Unfortunately, the report might be right; according to an article in Health Data Management, health insurer Premera Blue Cross was just hit by a “sophisticated cyberattack.” Premera said hackers may have accessed vital member and applicant information such as names, dates of birth, email addresses, Social Security numbers, and bank account information. Premera is working feverishly to address this “giant hack” however possible.

It goes without saying that data security is an extremely important compliance issue for the healthcare sector (HIPAA, anyone?). This news only amplifies the fact that healthcare organizations must treat the successful implementation, as well as consistent assessment, of electronic data security policies as non-negotiable. Depending on the organization, PHI is shared in various ways – by fax, email, or managed file transfer – and each tends to play a key role in the exchange of PHI. In most cases, these modes of electronic data transmission have security features like message encryption in transit and at rest, Data Leak Prevention (DLP), specialized viewing privileges, and much more – all of which drive the protection, integrity, and security of electronic PHI. If anything, Premera’s experience should prompt healthcare organizations to vigilantly re-evaluate the quality of their security measures for protecting electronic PHI. To learn more about how OpenText Information Exchange products can help, please click here.
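For readers curious what a Data Leak Prevention rule actually does, the core idea is pattern matching against outbound content before it leaves the organization. The sketch below is a deliberately simplified illustration; the patterns and policy names are invented for this example and are not OpenText’s actual rules:

```python
import re

# Simplified patterns a DLP policy might flag in outgoing mail.
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")                 # U.S. Social Security number
MRN = re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE)  # hypothetical medical record number format

def violations(message):
    """Return the names of the policies an outgoing message violates."""
    found = []
    if SSN.search(message):
        found.append("ssn")
    if MRN.search(message):
        found.append("mrn")
    return found

print(violations("Patient MRN: 12345678, SSN 123-45-6789"))  # ['ssn', 'mrn']
print(violations("See you at the 3:15 meeting"))             # []
```

A production DLP engine layers much more on top (document fingerprinting, dictionaries, proximity rules, and per-recipient policies), but the gating decision is the same: block, quarantine, or encrypt the message when a pattern matches.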

Read More