Compliance

Data Driven Digest for September 18: Money and Finance

This week marks the 133rd anniversary of the opening of the Pacific Stock Exchange in San Francisco. The exchange was created to serve the interests of businesses that struck it rich mining for gold during the California Gold Rush. Nowadays, businesses mine data, hoping to strike it rich by analyzing it for clues about how to best serve their customers, streamline their operations, or gain a competitive advantage. In honor of those financial pioneers, this week we offer three different visualizations of financial data. Eureka!

U.S. Fiscal Responsibility

In 1789, the United States took out its first loan to pay the salaries of the existing and future presidents and the Congress. As our friend Katy French (@katyifrench) posted in Visual News this week, bean counters in Washington kept great records and even produced stunning visualizations to represent trends. The graphic above is the Fiscal Chart of Debt and Expenditures by the U.S. Government between 1789 and 1870. Note the spikes in military spending during the War of 1812 and the Civil War, as well as the first major accumulation of debt in 1861.

Euro Spending

How do Europeans spend their paychecks? That was the premise of a recent data plot developed by The Economist (@TheEconomist). Based on the Eurostat data set "Final consumption expenditure of households by consumption purpose," The Economist found life in the euro zone is quite diverse. Living in Lithuania? Your budget is dominated by food and clothes; Lithuanians also spend more per capita on alcohol and tobacco than the rest of Europe. Meeting in Malta? Forget about eating at home: nearly 20 percent of Maltese spending goes toward restaurants and hotels. Spaniards spend the least on transportation, and Germans spend more on furnishings than their E.U. neighbors.

World Population Based on Income

Our friends over at Pew Research Center (@PewResearch) have come up with an interactive visualization of how income relates to world population. For example, the map above shows the density of people living on what Pew terms a middle income, meaning daily wages between $10.01 and $20. According to the map, 13 percent of the 7+ billion people in the world are middle income. The map has a second option that reveals the percentage-point change in that population between 2000 and 2011. It's a fascinating study in both financial statistics and data mapping. The income groups are defined as follows: the poor live on $2 or less daily, low income on $2.01-10, middle income on $10.01-20, upper-middle income on $20.01-50, and high income on more than $50 (figures expressed in 2011 prices and 2011 purchasing power parities).
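Pew's brackets translate directly into a simple lookup. Here is a minimal sketch in Python; the function name and structure are ours for illustration, not Pew's:

```python
def income_group(daily_income_usd: float) -> str:
    """Classify a daily income (2011 PPP dollars) into Pew's five brackets."""
    brackets = [
        (2.00, "poor"),
        (10.00, "low income"),
        (20.00, "middle income"),
        (50.00, "upper-middle income"),
    ]
    for upper_bound, label in brackets:
        if daily_income_usd <= upper_bound:
            return label
    return "high income"  # more than $50 per day

print(income_group(13.50))  # -> "middle income"
```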

Read More

Digital Engagement: A New Business Requirement

Digital engagement isn’t an option anymore; it’s a requirement. Today’s consumers are savvy and fickle, and companies must work to earn their loyalty. Consumers are demanding more from the brands they love, and their tolerance for anything but a seamless, engaging, and compelling experience is flagging. In a digital world, organizations must digitize their customer journeys, from initial interest through to purchase and follow-on service or support. The best way to do this is to shift to a digital marketing strategy: one that creates consistent and compelling customer experiences at every touchpoint through omni-channel delivery, responsive design, and targeted communications and information.

Digital technologies have introduced new customer touchpoints and increased opportunities to engage. Since consumers often use more than one channel to interact with a brand (in some instances five or six), delivering uniform and relevant messages across all channels is crucial for return on marketing investment and customer satisfaction. Omni-channel delivery focuses on meeting consumer needs by pulling together programs to provide a cohesive brand experience across channels, platforms, and devices.

To borrow from Bruce Lee, digital design should “be like water”. You put water into a cup, it becomes the cup. You put water into a bottle, it becomes the bottle. You put water into a teapot, it becomes the teapot. The same holds true for digital experiences: the transition from desktop to device to point-of-sale should be fluid. This is achieved through responsive design. Customers don’t see individual devices or channels; they look for a consistent and familiar brand experience that delivers relevant content.

Nirvana on the customer journey is reached when a company anticipates the needs and wants of a customer and serves up targeted and tailored content, products, or services in the moment of need, wherever the customer is. Organizations that can predict customer behavior have a better chance of fulfilling consumer needs. Analytics, the analysis of data collected across the various touchpoints of the customer journey (transactions, interactions, social media sites, and devices), helps organizations discover valuable customer insights so that they can offer more personalized and satisfying experiences.

The most effective way to target different audiences is to use messages that focus on the products and services with the greatest appeal for each segment. Using dynamically generated customer communications, organizations can create and automate their marketing campaigns. When correspondence is part of a digitized process, the end results are gains in efficiency and the ability to create superior customer experiences.

As one of the foundational suites for Enterprise Information Management (EIM), Customer Experience Management (CEM) aims to create a richer, more interactive online experience across multiple channels without sacrificing requirements for compliance and information governance. CEM brings together all of the technologies required to re-architect back-office systems, consolidate customer data, and create digitized front-end experiences.

Digital engagement starts inside the firewall and extends outside the enterprise, all along the supply chain. In the next post in this series, I’ll explore how the supply chain is being disrupted and how enterprises can digitize key processes for greater collaboration, information exchange, and business agility. Find out how you can capitalize on digital disruption.
To learn more, read my book, Digital: Disrupt or Die.

Read More

9 Information Governance Mistakes to Avoid

Reliable data preservation, eased regulatory compliance, streamlined eDiscovery, and business continuity: with such great benefits, it makes sense that you want to get started with an archiving solution right away. Don’t dive in yet, though. Information governance (IG) will change your way of doing business, but defining how it will be rolled out can prove challenging. Many organizations have good intentions but end up doing more harm than good. Instead of diving in head first, it’s often best to test the information governance waters and properly ready yourself. Below are nine of the most common mistakes to avoid when rolling out an IG program:

#1 Treating IG as a one-off project
Many organizations make a big deal over the launch of an information governance solution. But what happens after the initial kick-off? They often forget that an IG solution will need to be maintained and supported long after the party’s over.

#2 Ignoring data quality and relevancy
One of the biggest benefits of a strong IG program is increased data quality and relevancy. Too many companies implement IG technology but don’t spend the time to manage information that is Redundant, Obsolete, or Trivial (ROT). Part of IG is giving higher visibility to critical business data and getting rid of the ROT.

#3 Not deleting ROT
While instant messages with colleagues about a particular legal matter can be crucial to litigation, those that painstakingly plan the upcoming corporate picnic are not. Organizations often make the costly mistake of saving all of their data. This fear-driven approach can cost substantial amounts of money in storage and create a whole lot of clutter. Having a deletion strategy is an essential part of an IG program (see the sketch at the end of this post).

#4 Deleting when you should be archiving
While defensible deletion is an important part of information governance, so is archiving. Often, as data volumes approach capacity, organizations become more “choosy” about what they keep and can start to overlook things. Ensure that the IG solution is scalable and can accommodate growing volumes.

#5 Lack of executive buy-in
Any information governance program impacts the entire organization, making executive buy-in an essential component of success. If the solution is implemented below the C-level, it may not have the support, budget, or visibility it truly needs to flourish.

#6 Excluding experts
You wouldn’t hire an electrician to fix your plumbing, so why invite such complications in the workplace? Designing and implementing an IG solution can be daunting, so leave it to the experts. Hire consultants who are well versed in your industry and understand the compliance issues you’re facing. Typically this is money well spent.

#7 Making it about compliance only
While information governance is partly about keeping an organization in regulatory and legal compliance, making that the main focus of your program can be limiting (and boring). Get users invested in the new solution by demonstrating how it affects them individually: show how sales can close more deals, how marketing can deliver products to market quicker, and how legal can respond proactively to litigation.

#8 Implementing technology before policy
This is where testing the IG waters is crucial. If you implement technology without the proper policies or company culture in place, valuable resources will be wasted.

#9 Believing you’re too small for IG
Not all IG solutions are created equal, but organizations of every size can benefit from one. While enterprise-size companies have massive amounts of data that can be overwhelming to manage without the necessary technology, smaller companies have fewer resources to manage the data they do have. All it takes is one lawsuit or rogue employee to cause irreversible damage.

Instead of viewing IG adoption as a race to the finish, envision the process as creating culture change across the company, and properly prepare yourself for the journey.
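To make #3 concrete, a deletion strategy usually reduces to retention rules evaluated against each item’s category and age. A minimal sketch in Python; the categories and retention periods are illustrative assumptions, not records-management or legal guidance:

```python
from datetime import date, timedelta
from typing import Optional

# Illustrative retention periods by content category; real schedules come
# from counsel and the records team, not from a blog post.
RETENTION = {
    "legal_hold": None,               # never auto-delete while on hold
    "contract": timedelta(days=7 * 365),
    "email": timedelta(days=3 * 365),
    "chat": timedelta(days=365),      # picnic-planning IMs land here
}

def is_deletable(category: str, created: date, today: Optional[date] = None) -> bool:
    """True once an item has outlived its retention period."""
    today = today or date.today()
    period = RETENTION.get(category)
    if period is None:                # unknown category or legal hold: keep it
        return False
    return today - created > period

print(is_deletable("chat", date(2013, 6, 1), today=date(2015, 7, 1)))  # True
```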

Read More

ROI: Defining the Return on Investment for Information Governance Software


Without hard revenue or growth statistics, it’s often challenging to define the ROI of implementing an information governance solution. Organizations often struggle to quantify the results, leaving executives skeptical about the investment. An inherent challenge is the variety of components that can comprise an overall information governance solution, and thus the corresponding investment. Some of the key variables that determine your information governance expenses include:

Project management and planning: Who will manage the project? Will all departments give input and feedback on the software solution?
Number of users: Typically, the more users, the higher the cost.
Programming: Does your organization require any special features or usability requirements?
Training: Training will incur additional expenses and staff time, depending on the depth of necessary onboarding.
Support: Costs can vary greatly depending on whether support is 100 percent outsourced, 100 percent internal, or a combination of the two.
Storage: Cost per GB may be a small initial investment, but expect data volumes and the associated storage costs to grow.

After the costs are defined, many executives want a clear picture of ROI. The results of your information governance efforts can vary. ROI might actually be negative in some instances if you develop the wrong plan or implement the wrong software. On the other hand, ROI from an information governance solution can amount to millions of dollars, either in direct revenue from finding critical business documents or in cost savings from avoiding damaging litigation. Below are seven of the leading places to look for information governance ROI:

Implementation and maintenance: The upfront cost and hours spent during implementation, plus ongoing expenses such as hosting fees and staffing, give you a break-even point for your ROI (made concrete in the sketch at the end of this post). The lower these costs, the faster you’ll start seeing positive ROI.
Storage reduction: The fastest way organizations see ROI from information governance is the ability to stop using off-site storage and costly back-ups, and to defensibly delete information.
eDiscovery: eDiscovery is a substantial financial burden, and one case can cost millions in discovery costs. Information governance can substantially reduce these costs by decreasing review times and helping you avoid legal expenses and costly unfavorable outcomes.
Productivity: Employees spend countless hours a week trying to locate and re-create documents they are unable to find. The time savings can have an immediate impact on ROI and drive top-line growth.
Litigation: More organized and thorough data can result in higher litigation success rates.
IT resources: Information governance practices can free up the time and cost of using internal resources for archiving, eDiscovery, and overall information management.
Security: Information governance can reduce additional spending on security, and can also reduce data breaches and theft.

With so many variables to consider, it’s difficult for buyers to determine the best solution, and everyone knows someone who made a bad investment in the wrong technology. Even so, an archive is not the place to pinch pennies. The difference between a $50k and a $100k investment in the right technology might seem substantial, but the right choice can translate into hundreds of thousands, if not millions, of dollars realized over time.
Your information governance partner should help identify where spending needs to be made and where savings can be realized, so that ROI is not only clear, but substantial.
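The break-even arithmetic behind the first item above is easy to make explicit. A minimal sketch in Python; the dollar figures are invented purely for illustration:

```python
def months_to_break_even(upfront_cost: float,
                         monthly_running_cost: float,
                         monthly_savings: float) -> float:
    """Months until cumulative savings cover upfront plus ongoing costs."""
    net_monthly_gain = monthly_savings - monthly_running_cost
    if net_monthly_gain <= 0:
        raise ValueError("Savings never overtake costs: ROI stays negative.")
    return upfront_cost / net_monthly_gain

# Hypothetical: $100k implementation, $5k/month hosting and staffing,
# $20k/month recovered via storage reduction and eDiscovery savings.
print(months_to_break_even(100_000, 5_000, 20_000))  # ~6.7 months
```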

Read More

Kicking Off Information Governance – the First 30 Days

Everyone is now on board with the information governance (IG) initiative you’ve championed across the organization, and it’s time to get started. With data volumes growing by the minute, it may feel like you are already behind schedule, and deciding where to begin is daunting. We are here to tell you the truth: kicking off an IG initiative is not easy. With a carefully planned roadmap, however, you can roll out the solution strategically and with great success. Below is a basic guide to help you map out your first month of implementation, including tips to bolster your success.

Week One
Select an information governance committee. This group will outline the goals and objectives for the initiative, which will in turn define the procedures and policies that govern its ongoing progress. The committee can also help establish a timeframe for rollout and ongoing maintenance.
TIP: Ensure that key stakeholders from each essential business function are on the team, to not only represent their respective departments but also champion the initiative within each group.

Week Two
Start with small wins. Information governance is much too large an initiative to implement organization-wide from the start. Depending on the size and scope of the project, develop a step-by-step rollout plan that addresses one department or one project at a time. Leave a week or two between each rollout to allow room for troubleshooting before moving on to the next project. A few small projects to consider as starting points: create a training program; begin migration and retirement of old systems; start identifying and implementing technology; create a roadmap.
TIP: Identify areas or projects that can quickly show results.

Week Three
Reinforce participation. One of the most challenging aspects of information governance is ongoing, company-wide participation. Building a culture of information governance and compliance is critical for success. It’s not an overnight process, but it’s important to keep the project top of mind and continue building momentum so the solution doesn’t fall off everyone’s radar.
TIP: Consider building an incentive-based reward system that links performance and participation. Rewards can be as simple as recognition, such as highlighting successful users.

Week Four
Establish some early ROI. ROI will be its own champion for the ongoing success of the project. Reporting positive results from the beginning will get more people on board, easing the implementation and supporting your case if and when more budget or resources are needed.
TIP: Good places to look for early ROI include reduced storage volumes, improved eDiscovery timeframes, and increased visibility.

Read More

Why Information is the New Currency

We live in a digital world. A testament to this new reality is the growing value of digital content. People download songs, purchase movies online, exchange emails, and share personal information, all in the form of digital content. Information in its many new forms has become commoditized. In a digital world, information is the new currency. Will it replace the dollar, the euro, the yen? Not yet, but as information flows across networks, as it is exchanged and more metadata is collected, it grows in value. New businesses and whole industries are emerging to support the digitization of content. As industry leaders like Google and Facebook have demonstrated, opportunities to monetize information are abundant.

Like money, data can be stolen. As information grows in value, so will the need to protect and manage it, and this will increasingly be mandated by governments and regulatory bodies. Many large companies (health care providers, governments, and banks, to name a few) are the gatekeepers of highly confidential personal information, and they are susceptible to information leaks. In a digital world, how will governments and regulators monitor and protect the huge amounts of personal data stored in the Cloud? As society becomes digital and the Internet propagates a faster pace of crime, organizations will need to focus on the development and enforcement of governance policies, standards, and systems to prevent identity theft and online fraud.

The mass digitalization of products, services, processes, and overall business models will demand a disciplined approach to managing, governing, and innovating with information. Enter Enterprise Information Management, or EIM. EIM is a set of technologies and practices that maximize the value of information as it flows across networks, supply chains, and organizations. Its core technologies work together to create an end-to-end platform for sharing, collaboration, analysis, and decision-making, based on the effective management of information to harness its potential while mitigating risk through governance, compliance, and security. EIM delivers a long list of benefits for the enterprise, including reduced costs, increased transparency, improved security and compliance, and optimized productivity and efficiency. But the overarching benefit EIM gives organizations is the ability to simplify their operations, transform their processes and information, and gain the agility to innovate at the speed of digital.

In a digital world, information will play a fundamental role in empowering the enterprise. Digital leaders will differentiate their products and services based on a strategy that maximizes the potential of digital information. They will use EIM technologies to connect information for better performance, greater opportunity, and deeper insight into their customers. I’ll take a closer look at how competitive advantage is created through managing consumer-related information in the next post in this series, “Digital Engagement and the New Consumer”. Find out how you can capitalize on digital disruption. To learn more, read my book, Digital: Disrupt or Die.

Read More

5 Ways to Simplify eDiscovery


Many organizations are failing to keep up with their unstructured data. Legal and IT find themselves dealing with unmanaged data growth, often making eDiscovery a monumental task that eats up valuable resources. With legal and IT budgets constrained, it can be hard to manage rising and unpredictable eDiscovery costs. Being proactive rather than reactive is key. Below are five ways you can simplify the eDiscovery process and substantially reduce cost.

#1 Avoid legal jargon
Legal holds are meant to be acted upon by employees. To streamline the eDiscovery process, legal and compliance teams need to understand how employees talk and the issues related to their documents. Draft your legal hold notices in a way that everyone can understand, so they know what to do and fully understand expectations.

#2 Anticipate risk
While you may be tempted to cross your fingers and simply hope no disputes arise, anticipating and planning for potential risks can prove very rewarding. Data and documents can be categorized based on potential risks; for example, against trademark disputes, all corporate naming and branding documentation can be stored in a separate location. If and when litigation arises, your preparedness will pay off.

#3 Perform early case assessment
Once data is properly preserved and collected, it’s extremely important to condense the information down to a more manageable size. Actively culling the data during document review will save you time and money during the eDiscovery process (a simple culling sketch follows at the end of this post).

#4 Integrate solutions
eDiscovery was formerly a “piecemeal” solution: separate tools such as early case assessment and document review functioned as standalone software packages, which demanded the risky business of information transfer. Integrating all eDiscovery components on one platform is not only the safest route, but the simplest as well.

#5 Hire experts
An eDiscovery solution provider can best advise and consult clients through the assessment, selection, and implementation processes. These experts can help you anticipate and plan for issues before they arise, and should essentially become an extension of your internal team.

What it all boils down to is being proactive. eDiscovery costs are substantially reduced when you’re prepared for an eDiscovery request with the right tools and team in place. Asking for advice now and implementing technology might save you headaches, and hundreds of thousands if not millions of dollars, in eDiscovery costs and potential litigation disputes.
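To illustrate the culling in #3: early case assessment often starts by filtering a collection down to a date range and a set of custodians before human review begins. A minimal sketch in Python; the field names and criteria are illustrative assumptions, not a real review protocol:

```python
from datetime import date

def cull(documents, custodians, start, end):
    """Keep only documents from named custodians inside the review window."""
    return [
        doc for doc in documents
        if doc["custodian"] in custodians and start <= doc["sent"] <= end
    ]

collection = [
    {"id": 1, "custodian": "alice", "sent": date(2014, 3, 2)},
    {"id": 2, "custodian": "bob",   "sent": date(2012, 1, 15)},
    {"id": 3, "custodian": "alice", "sent": date(2015, 6, 30)},
]
review_set = cull(collection, {"alice"}, date(2014, 1, 1), date(2014, 12, 31))
print(review_set)  # only document 1 survives the cull
```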

Read More

Compliance violations for faxing and Windows Server 2003 users

If your organization or its users are still on Windows Server 2003 after July 14, 2015, be prepared for the consequences. Since Microsoft ends support for Windows Server 2003 this month, anyone still running it is at risk of security breaches and exposure. Malware and cyber threats can go undetected in unsupported operating systems, which alone is a huge risk for organizations. But did you know that these risks also put an organization in danger of non-compliance with regulations such as HIPAA, PCI, Sarbanes-Oxley, and others? Running an unsupported operating system, such as Windows XP, might be enough to make the Federal government take a closer look at organizations bound by these important regulations. This non-compliance extends to any fax server infrastructure that may be running on Windows Server 2003. If you have a fax server deployed on Windows Server 2003, take a deep breath and call OpenText. If you need an on-premises fax server running on either Windows Server 2008 or Windows Server 2012, we’ve got you covered. Or eliminate the need for any operating system for your faxing by using OpenText Fax2Mail, an enterprise-grade, 100% cloud fax service. We can do that, too. Either way, don’t let your operating system put your faxing operations at risk of non-compliance. For more about the end of support for Windows Server 2003, find more information here!

Read More

Expanding the Banking Universe with a Mobile-Only Play

Mobile phones continue to help break new ground in the world of banking. A very interesting example is Atom Bank, based in the United Kingdom today (or is it everywhere?). Atom Bank was recently awarded a banking license and plans to commence operations later this year, and it has already raised around 25 million pounds (about $39 million). So why is this so interesting and different? Atom Bank will operate only through a mobile app. That’s right: they just have an app. Of course there will be no branches, and they will not even have a website initially. This is a strange strategy; how will customers find them unless they read my blogs? Atom claims that customers will be able to open accounts and carry out all their banking activity using only a smartphone. They also said they want to “set new standards for the banking sector” when it comes to technology.

This matches quite well with what Millennials are thinking: that innovation in banking will come from technology companies, not from banks. Viacom Media Networks did research and came up with the Millennial Disruption Index, copied below. Notice that 33 percent of Millennials do not think they will ever need a bank, and nearly half are counting on technology start-ups to overhaul the way banks work. Talk about supply meeting demand, and here comes Atom Bank. Or maybe they should be called Atom Software, the smartphone technology company with an app.

Banking on Mobile (Source: Viacom Media Research)

Atom is the latest in a string of technology companies shaking up the banking industry. Who would have thought that Apple would create a payment service a la PayPal? It has certainly done well so far. Anybody know a user or two of Venmo, the under-30 set’s current favorite for making small payments to each other? This is not futuristic; these services already exist and work well today.

So, will Atom Bank be what Millennials are longing for? Well, there are a few challenges, or what we might call complexities. They will need to work with a regular retail bank for mundane things like checking and cash deposits. I don’t know if they will be responsible for Know Your Customer (KYC), or will have deposit limits, or will concern themselves with anti-money laundering and SARs (Suspicious Activity Reports). Perhaps the brick-and-mortar bank they plan to partner with will do the heavy compliance lifting for them. Since Atom Bank is all about a high-quality customer experience on a smartphone, they are certainly addressing what Millennials are looking for. They are not yet sharing all aspects of what they plan to do, as they do not want to assist potential competitors, but they did announce that they will have biometric security, 3D visualizations, and gaming technology. Sounds like fun! An app on your smartphone will do all of that? Mobile phones, now smartphones, have come a long way.

Even if all of this works as planned and Atom Bank is very successful, they will face fierce competitors, from startups as well as established organizations. But if they capture the hearts, minds, and bank accounts of the Millennials before others do, they will be very successful, and the reality of the Millennial Disruption Index will become even more obvious.

Read More

Achieving Equal Access in Health Care Information

According to a report published by the Equal Rights Center in 2011, blind and visually impaired individuals routinely face barriers to receiving health care information in accessible formats. This includes documents such as test results and prescriptions; benefits information such as Explanations of Benefits and eligibility and termination notices; and e-delivered communications such as billing statements and summaries of benefits. It also includes information received by visually impaired Americans covered by Medicare and Medicaid. These individuals are often presented with work-around solutions, such as relying on friends, family, or healthcare practitioners to read their private medical information to them. Not only is this a breach of the individual’s privacy, it can also lead to outcomes such as poor health and loss of benefits.

The Centers for Medicare & Medicaid Services (CMS), an agency of the US Department of Health and Human Services, is the largest single payer for health care in the United States. As per data from the CMS:

90 million Americans receive healthcare coverage through Medicare, Medicaid, and the State Children’s Health Insurance Program.
Approximately 4.3 million individuals over the age of 65 report some form of visual impairment.
Approximately 700,000 Medicare beneficiaries between the ages of 21 and 64 have some form of visual impairment.

Private healthcare insurers have been contracted by CMS to offer Medicare and Medicaid programs, and these insurance providers must meet federal regulation (i.e., Section 508) requiring that they ensure access to and use of their websites and digital documentation by people with disabilities, including blind or visually impaired individuals. Non-compliance could lead to penalties and the loss of lucrative contracts for insurers. It is therefore no surprise that document (e.g., PDF) accessibility is a hot-button issue for government and even for the private healthcare insurers contracted by CMS.

As “public accommodations” under the Americans with Disabilities Act (ADA), healthcare insurers are generally well aware of their legal responsibility to customers with disabilities such as visual impairment, and are quite used to complying with these regulations. But now that accessibility requirements are expanding into cyberspace, healthcare insurers need to find appropriate technology solutions for this new challenge. Until a couple of years ago, it simply had not been possible for healthcare insurers to create high-volume communications and documents in accessible PDF format. The sheer scale of production, with documents numbering in the thousands or millions, precludes manual remediation because of several limiting factors:

The cost of manually remediating documents
Delivery time, due to the laborious nature of manual remediation
Stringent accessibility tagging requirements

OpenText has created an automated, software-based solution to address these very limitations. The OpenText Automated Output Accessibility solution can generate accessible PDFs from any high-volume, system-generated input print stream or other format quickly and efficiently, while keeping storage size at bay. The solution was designed using thousands of man-hours’ worth of very specific experience and expertise in system-generated document accessibility, and our industry-leading transformation engine generates accessible output in milliseconds.
In fact, the output generated from this solution has been reviewed by the National Federation of the Blind and other prominent organizations for the visually impaired. Learn more about the OpenText Automated Output Accessibility solution at http://ccm.actuate.com/solutions/document-accessibility.

Read More

Big Data Is Still a Game Changer, but the Game Has Changed. Here’s How.

Not long ago, organizations bragged about the large volume of data in their databases. The implied message from IT leaders who boasted about their terabytes and petabytes and exabytes was that company data was like a mountain of gold ore, waiting to be refined: the more ore they had, the more gold (that is, business value) they could get out of it. But the “bigness” of Big Data isn’t the game changer anymore. The real competitive advantage from Big Data lies in two areas: how you use the data, and how you provide access to the data. The way you address both of those goals can make or break an application, and in some cases even make or break your entire organization. Allow me to explain why, and tell you what you can do about it, because mastering this important change is vital to enabling the digital world.

How Big Data Has Changed

Each of us, and the devices we carry, wear, drive, and use every day, generates a surge of data. This information is different from the Big Data of just a few years ago, because today’s data is both about us and created by us. Websites, phones, tablets, wearables, and even cars are constantly collecting and transmitting data: our vital stats, location, shopping habits, schedules, contacts, you name it. Companies salivate over this smorgasbord of Big Data because they know that harnessing it is key to business success. They want to analyze this data to predict customer behavior and likely outcomes, which should enable them to sell better (and, of course, sell more) to us. That’s the “how you use data” part of the equation, the part that has remained pretty consistent since market research was invented more than 100 years ago, but that has improved greatly (in both speed and precision) with the advent of analytics software.

Then comes the “how you provide access to data” part of the equation, the part that highlights how today’s user-generated Big Data is different. Smart, customer-obsessed businesses understand that the data relationship with their consumers is a two-way street. They know that there is tremendous value in providing individuals with direct, secure access to their own data, often through embedded analytics. Put another way: the consumers created the data, and they want it back. Why else do you think financial institutions tout how easily you can check balances and complete transactions on smartphones, and healthcare companies boast about letting you check test results and schedule appointments online? Making your data instantly available to you, and only to you, builds trust and loyalty, and deepens the bond between businesses and consumers. And as I said earlier, doing so is vital to enabling the digital world.

The New Keys to Success

When a business decides to let customers access their data online and explore it with embedded analytics, that business must give top priority to customers’ security and privacy concerns. In a blog post, “Privacy Professor” Rebecca Herold notes that data breaches, anonymization, and discrimination rank among the Top 10 Big Data Analytics Privacy Problems. Her post is a must-read for organizations that plan to provide data analytics to customers. To underline Herold’s point, Bank Info Security says that personal data for more than 391.5 million people was compromised in the top six security breach incidents of 2014, and that number does not include the Sony breach that made headlines.
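Delivering each customer’s data “to you, and only to you” usually comes down to enforcing the user scope server-side on every query, rather than trusting anything the client sends. A minimal sketch in Python using the standard library’s sqlite3; the schema and names are invented for illustration:

```python
import sqlite3

def balances_for(conn, authenticated_user_id):
    """Fetch account balances scoped to the authenticated user only.

    The user id comes from the server-side session, never from request
    parameters, so one customer can never page through another's rows.
    """
    return conn.execute(
        "SELECT account, balance FROM balances WHERE user_id = ?",
        (authenticated_user_id,),
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE balances (user_id INT, account TEXT, balance REAL)")
conn.executemany("INSERT INTO balances VALUES (?, ?, ?)",
                 [(1, "checking", 1200.0), (2, "checking", 9.5)])
print(balances_for(conn, 1))  # [('checking', 1200.0)]
```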
Security and privacy must be a primary consideration for any organization harnessing Big Data analytics. Remember what Uncle Ben said to Peter Parker: “With great power comes great responsibility.” Meeting the privacy and security challenges of today’s user-generated Big Data requires a comprehensive approach that spans the lifecycle of customer data, from generation through distribution. If you want guidance in creating such an approach, check out the replay of a webinar I presented on June 23, Analytics in a Secure World. My colleague Katharina Streater and I discussed:

The drivers and trends in the market
What top businesses do today to ensure Big Data protection
How you can secure data during content generation, access, manipulation, and distribution
Strategies for complying with data security regulations in any industry

If you watch the replay, you’ll come away with great ideas for securing data from the point of access all the way through to deployment and display of analytic results. We explained why a comprehensive approach minimizes the risk of security breaches while simultaneously providing a personalized data experience for each individual user. We closed the program by explaining how OpenText Analytics and Reporting products have the horsepower required to handle immense volumes of data securely. We showed how the OpenText Analytics platform scales to serve millions of users, and explained why its industrial-strength security can integrate directly into any existing infrastructure. Please check out Analytics in a Secure World today.

Privacy Please image by Josh Hallett, via Flickr.

Read More

8 Top Considerations for a Successful Cloud Partnership

As your organization embarks on creating a cloud strategy, there are many things to consider. What are the top benefits you are looking to achieve by moving workloads into the cloud? What areas of your business are you willing to consider as cloud or hybrid cloud implementations? Most of all, as you move into the cloud you are extending your infrastructure and IT team to include your cloud provider. What are the key things you should consider as you determine the cloud partnerships you will embrace?

One Size Does Not Fit All

Developing a cloud strategy is an exercise in understanding your business processes, workloads, security rules, compliance requirements, and user adoption, just to name a few. It is not a simple task: moving business to the cloud might mean a combination of deployment and costing strategies. Options for on-premises, public cloud, private cloud (both on- and off-site), hybrid cloud, SaaS, PaaS, and IaaS all offer organizations flexibility, but they can also be confusing. Which offering is ideal for which business process, to generate the highest efficiency and cost benefit? There is no one-size-fits-all solution, which means IT leaders need to be prudent about cloud options and work with their vendors to ensure they get the most value for their investment.

Information Matters

As we live in the digital age, many organizations recognize the value of information and place significant priority on protecting it. Information governance (knowing what information you have, where it is, and what you need to do with it) has never been more important. When organizations look at moving information to the cloud, they need to be extra vigilant to ensure that their information policies and compliance regulations are both understood and upheld by their cloud partner. The value of information, and the level of control it requires, should inform an organization’s decision about which applications and which data will reside in the cloud, on premises, or in a hybrid cloud implementation.

Read More

Upcoming Accessibility Deadlines for Federal Student Loan Statement Servicers

Section 508 and WCAG compliance has been an important mandate for the U.S. Federal Government, and the Department of Education is one of the agencies actively working to meet these requirements for visually impaired students receiving federal loans. The Department of Education has now issued time frames and deadlines for WCAG compliance to student loan servicers that generate and distribute federal Direct Loan Program (DLP) statements, notices, and communications. Accordingly, all loan statements, notices, communications, forms, and websites must be made available to borrowers in accessible read-only HTML format or a non-proprietary equivalent (e.g., accessible PDF), complying with Section 508 of the Rehabilitation Act and WCAG 2.0, within a few short months. The Federal Government has also established additional time frames for testing and verification of accessibility compliance ahead of the actual deadline for making accessible content available to borrowers.

Loan service providers typically generate statements, notices, and communications in print stream or PDF formats. Making these accessible using traditional methods is manual, laborious, time-consuming, and expensive. Visually impaired students therefore typically experience a lag in receiving the critical information in their statements, notices, and communications in formats they can access, due to the delays involved in generating manually tagged accessible PDF, Braille, Large Print, Audio, or HTML formats. This is far from ideal, and visually impaired students now expect to be treated the same as anyone else with regard to the timely availability of important information.

Most federal student loan statement servicers are still struggling to find a solution that meets compliance and can be made operational before the required deadlines. While building a new solution from scratch is often how IT departments approach technology challenges, given the tight timelines involved and the level of accuracy, expertise, testing capability, and technological know-how required, it is not an efficient way of addressing this particular requirement. Key points for federal loan service providers to consider when implementing an automated accessibility solution:

The solution should be easy to implement and should not require management of multiple formats
Storage costs should not increase as a result of the solution
The solution should be infinitely scalable and able to support the demands of generating millions of documents without performance issues

The OpenText Automated Output Accessibility solution addresses each of these requirements, and can generate accessible PDFs from any input print stream format quickly and efficiently, while keeping storage size at bay. The solution was designed using thousands of man-hours’ worth of very specific experience and expertise in system-generated document accessibility, and our industry-leading transformation engine generates accessible output in milliseconds. In fact, the output generated by this solution has been reviewed by the National Federation of the Blind as well as the Department of Education. Using best-of-breed technology and accessibility-specific expertise is the only foolproof way of meeting the tight time frames and deadlines defined by the Department of Education. Learn more about our solution here.

Read More

Replacing Your Legacy Archiving System is a Pain. No More!

Large organizations rely heavily on rapidly evolving technology to thrive in today’s competitive business environment. One of these vital solutions is the electronic archiving system, which is expected to maintain a comprehensive and accurate record of customer information such as statements, bills, invoices, insurance policies, scanned images, and other organizational information essential to the survival and growth of the enterprise. It is critically important that these assets be retained in an efficient and intelligent manner so that they can be retrieved on demand for customer presentment, compliance, auditing, reporting, and more.

Like all information technology, archive systems need to be upgraded from time to time. Depending on the requirements of a progressive organization, this could even mean replacing the existing system with a brand new solution. The first step toward an effective solution, however, is identifying the shortcomings of the current system in the context of your evolving business needs. Here are a few tell-tale signs that your archiving system hasn’t been keeping up with your growth:

Waning vendor support: The system doesn’t receive enough attention from the vendor in terms of upgrades and support.
Costly upgrades: It becomes prohibitively expensive to boost performance or add new capabilities and features.
New media deficit: The system falls short on receiving and serving up content across the multitude of customer channels, including web, social, mobile, tablet, text message, email, and print.
Social disconnect: Perhaps the most easily recognizable symptom of an outdated archive system is the inability to connect with social media such as Facebook and Twitter and to capture and store customer information.
Content inaccessibility: Users complain of an inability to extract data for targeted messaging, trans-promotional marketing, analytics, and other sales and marketing functions.
Compliance infractions: An inability to store or retrieve content that could lead to investigations, fines, license revocations, or lawsuits.

If you can relate to one or more of these issues, then upgrading to a more contemporary solution may be the best way forward. One example of archive migration we have conducted for our customers, and have extensive experience in, is the Mobius/ASG-ViewDirect® system. The challenges often highlighted for this system include some of those listed above, as well as other issues typically seen in legacy archive systems, such as the lack of a coherent product roadmap, high costs, and an outdated user experience. Customers are often certain about the need for migration but unsure about how to move to a new archive without disrupting critical business functions. The only real roadblock to improved performance, then, is the migration itself. The process can be laborious and cumbersome, with key performance factors around the ability to perform complex document migrations on time and within budget, while maintaining access for existing applications, repurposing information locked in legacy document formats, and meeting regulatory requirements. While enterprise IT departments have stringent migration requirements, modernizing your archiving system doesn’t have to be painful, and OpenText®’s ECM Migration service has a methodology in place to make sure it isn’t.
The service provides a way to efficiently migrate content out of legacy archiving systems like Mobius/ASG-ViewDirect® and others to a more contemporary solution such as OpenText’s Output Archive (formerly known as BIRT Repository). Unique benefits of using OpenText’s ECM Migration Service for Mobius migrations include the ability to migrate content out of Mobius without purchasing expensive Mobius APIs, and the capability to read directly from the underlying file structure using the Mobius Resource Extractor, bypassing the need for Mobius to be running. Our ECM Migration methodology was designed around best practices gleaned from many successful engagements and utilizes award-winning technologies to automate migration in a lights-out environment without disrupting day-to-day business activities. The ECM Migration team has worked with many ECM systems, including IBM® Content Manager OnDemand (CMOD), IBM® FileNet® Image Services, IBM® FileNet® P8, ASG-ViewDirect®, and others, for decades, and the maturity of our solution proves it. Our technology and DETAIL™ Methodology enable us to:

Manage all aspects of a migration
Cut up to six weeks off the initial planning
Use standard logging on a single platform
Provide multi-threaded support out of the box
Implement process flows, advanced logic, and routers through drag-and-drop interfaces, without scripting
Connect to and pool connections with multiple databases and repositories
Run processes concurrently, by thread or broken down by stage (i.e., Load, Extract, Convert; see the sketch at the end of this post)
Handle massive volumes of data, documents, images, and metadata

So, if you think it’s time to say goodbye to your current archiving system, know that there are experts out there who can help you define your requirements and deploy an appropriate solution that will take you where you want to go. And remember: organizations that evolve, thrive. Others perish.
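As a footnote to the stage list above, here is a rough illustration of stage-based, multi-threaded migration, where each document flows through extract, convert, and load steps on a worker pool. A minimal sketch in Python; this is our illustration of the general pattern, not OpenText’s actual implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def extract(doc_id):
    return {"id": doc_id, "raw": f"<legacy blob {doc_id}>"}  # read from legacy store

def convert(doc):
    doc["converted"] = doc["raw"].upper()                    # stand-in for format conversion
    return doc

def load(doc):
    return f"loaded {doc['id']}"                             # write to the new archive

def migrate(doc_ids, workers=4):
    """Run each stage across a thread pool; stages hand results forward."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        extracted = pool.map(extract, doc_ids)
        converted = pool.map(convert, extracted)
        return list(pool.map(load, converted))

print(migrate(range(3)))  # ['loaded 0', 'loaded 1', 'loaded 2']
```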

Read More

Accessible PDF Discussion: Generating Desktop-Level and Enterprise PDFs

To achieve complete web accessibility, organizations require a viable technology solution for automatically generating high-volume (i.e., enterprise-level) personalized customer communications in Accessible PDF format. Such a solution would give blind and visually impaired people immediate and equal access to electronic documents, which they currently do not have. This series of blog posts explains why demand is increasing for customer communication documents in Accessible PDF format, describes current industry practices for producing these documents, and introduces a new technology option that enables organizations to keep PDF as their default electronic format for high-volume, web-delivered documents of record while enabling full accessibility and usability of those PDFs for individuals who are blind or visually impaired. In recent blog posts, we examined the drivers behind web accessibility and the best practices for PDF and Accessible PDF. In this post, we will look at different approaches to generating PDF and Accessible PDF.

How PDFs are Generated

PDF documents are either created individually at the desktop level by human operators or in large batches at the enterprise level by sophisticated software applications.

Desktop Level

At the desktop level, PDF documents are manually created by individuals using word processors such as Microsoft Word, graphic design programs such as Adobe Creative Suite, or other software applications. These low-volume, ad hoc documents typically include annual reports, newsletters, marketing collateral, training manuals, program support material, and other public-facing documents. PDF versions of scanned hardcopy documents may also be created at the desktop level.

Enterprise Level

At the enterprise level, PDF documents are automatically created in large volumes by powerful software applications (e.g., document composition engines) supported by enterprise-grade IT infrastructure such as relational databases, high-speed servers, and large-capacity storage devices. These high-volume documents are typically customer communications such as notices, statements, bills, and invoices that are personalized with account data for individual recipients. Because they contain confidential information, customers usually access such documents through secure, password-protected web portals.

Reasons Why PDFs are Inaccessible by Default

At the desktop level, PDF documents are typically produced by converting existing native documents (e.g., Microsoft Office or Adobe Creative Suite documents) into PDF format. During the conversion, the software application attempts to formulate a tag structure based on the contents of the native document. If the author of the native document has not followed accessibility guidelines by explicitly identifying elements such as headings and properly formatting lists, tables, and other items, the software simply assigns a tag structure based on its best algorithmic guess about the document, often resulting in errors. While software applications that generate Accessible PDF output can faithfully reproduce the appearance of a native document, they cannot infer a logical reading order or produce meaningful alternative text for graphical elements. When alternative text for images is missing from the native document, the PDF conversion engine assigns generic identifiers, e.g., “Image 54”, which are meaningless when read aloud by a screen reader program.
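One consequence of the above is that an untagged (and therefore inaccessible-by-default) PDF can be detected by inspecting its document catalog. A minimal sketch in Python, assuming the open-source pikepdf library is available; note the check is necessary but not sufficient, since a tagged PDF can still be tagged badly, as described next:

```python
import pikepdf

def is_tagged(path: str) -> bool:
    """True if the PDF declares itself tagged and carries a structure tree."""
    with pikepdf.open(path) as pdf:
        root = pdf.Root
        marked = "/MarkInfo" in root and bool(root.MarkInfo.get("/Marked", False))
        has_structure = "/StructTreeRoot" in root
        return marked and has_structure

# Hypothetical file name for illustration; most converter output fails this test.
print(is_tagged("statement.pdf"))
```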
Unfortunately, the automated conversion process itself is error-prone, so even when native documents have been designed with accessibility in mind, Accessible PDFs require post-conversion inspection and adjustment by a knowledgeable technician to make their contents and tag structure 100% compliant with the specified accessibility standard (e.g., WCAG 2.0, Level AA). For example, during conversion, tables may be incorrectly tagged as graphics, or table data may become dissociated from its corresponding column or row header. Similarly, text headings may not be detected, and links without active and meaningful destinations may be recognized only as plain text. In short, desktop PDF conversion technology is far from perfect. Regardless of the quality of the native source document, PDFs created on the desktop must always be manually remediated and inspected (page by page) to ensure compliance with the accessibility standards required by assistive technologies. Traditionally, this is an expensive, labor-intensive, time-consuming process.

Traditional Approaches to Generating Accessible PDFs

Desktop Level

Existing low-volume, ad hoc PDF documents that need to be made accessible are manually remediated by human operators using specialized desktop software applications such as Adobe Acrobat Professional, CommonLook, ABBYY, and others. To lower costs and achieve faster turnaround times, many organizations contract out manual remediation to specialized third-party service providers that operate more efficiently than smaller in-house teams. To facilitate new document creation and improve quality, organizations develop standard accessible templates for individual document types (e.g., Microsoft Office) and enforce their use by employees, vendors, and contractors. Once converted to PDF, every page of a new document is subjected to automated and manual accessibility and usability testing and remediation to ensure that it conforms to accessibility standards such as WCAG 2.0, Level AA.

Enterprise Level

At the enterprise level, organizations use powerful software applications to dynamically generate high-volume PDF communications for online presentment to customers. However, manual remediation does not scale for enterprise production of Accessible PDFs. The large number of documents created every month (thousands, millions, tens of millions, or more) precludes manual remediation as a viable option, even if only as an accommodation for the percentage of customers requiring the accessible electronic format. Due to sheer volume, manual remediation is cost- and time-prohibitive. Until recently, there was no automated technology solution either: organizations simply had no way to produce high volumes of personalized customer communications in Accessible PDF format. That has now changed. In the next blog post, we will take a look at an innovative software system that automatically remediates and creates high-volume Accessible PDFs.

Read More

Data Driven Digest for May 1

Next Monday is Star Wars Day, and we're getting a jumpstart on the holiday with today's Data Driven Digest. Our first item analyzes the latest teaser trailer for the next installment (yes, there's data there), then we stay out of this world to admire a video data visualization by an amateur astronomer. We finally plummet back to earth to visualize the value of our personal data.

Crash Course: It's common practice to start with data and create visualizations. When Rhett Allain saw the new teaser trailer for Star Wars: The Force Awakens, he did just the opposite: he observed the landspeeder cruising across a desert planet (which, according to news reports, is not Tatooine) and calculated its speed. Published in Wired Science, Allain's dissection of the scene is a terrific work of video forensics and includes all of his underlying data. The Force is strong in this one.

Location, Location, Location: While we're out in space, check out the video above by Tony Rice, a software engineer, amateur astronomer, and "solar system ambassador" for NASA. The video visualizes 24 hours of Global Positioning System (GPS) coverage by tracing the path of every GPS satellite and showing the overlap of their signals on the surface of the earth. Rice created the video using JSatTrak and orbital data from NORAD (a minimal ground-track sketch appears at the end of this digest). Rice has also created a similar video showing the "A-Train," a constellation of Earth-observing satellites that travel in a sun-synchronous orbit. No mind tricks here.

Personal Cost: Unlike GPS, which is a single system that works worldwide, the web of privacy and security laws and regulations that covers the globe is a madly mixed bag, and so are people's valuations of their personal data. In an article in the May issue of Harvard Business Review, authors Timothy Morey, Theo Forbath, and Allison Schoop surveyed consumers in the United States, China, India, Great Britain, and Germany about how they value web search history, location, health history, and nine other types of personal data. Their chart of variances from country to country (a snippet of which is above; click through) is part of an article well worth reading.

Like what you see? Every Friday we share favorite examples of data visualization and embedded analytics that have come onto our radar in the past week. If you have a favorite or trending example, share it: Submit ideas to blogactuate@actuate.com or add a comment below. Subscribe (at left) and we'll email you when new entries are posted.

Recent Data Driven Digests:
April 24: Global cravings, California roadkill, Bay Area costs, tech skills
April 17: Rage in the Iliad, comparing country sizes, 25,000 songs
April 10: Stick charts, water for food, UFO sightings, dataviz videos
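Postscript for the GPS item above: Rice built his video with JSatTrak, but the underlying idea, turning published NORAD orbital elements into positions over the earth, is easy to sketch. Here is a rough illustration using the open-source Python library skyfield (our choice for illustration, not Rice's toolchain), pulling current GPS elements from CelesTrak.

```python
# Rough sketch: compute one GPS satellite's ground track for 24 hours from
# NORAD two-line elements, using the open-source skyfield library. Tony
# Rice's actual video was made with JSatTrak.
from datetime import datetime, timezone
from skyfield.api import load, wgs84

ts = load.timescale()
gps_url = 'https://celestrak.org/NORAD/elements/gp.php?GROUP=gps-ops&FORMAT=tle'
satellites = load.tle_file(gps_url)
sat = satellites[0]

# Sample the sub-satellite point every 5 minutes for one day, starting
# today (propagating near the TLE epoch keeps the positions accurate)
now = datetime.now(timezone.utc)
times = ts.utc(now.year, now.month, now.day, 0, range(0, 24 * 60, 5))
track = wgs84.subpoint(sat.at(times))

for hhmm, lat, lon in zip(times.utc_strftime('%H:%M'),
                          track.latitude.degrees,
                          track.longitude.degrees):
    print(f'{sat.name}  {hhmm}  lat {lat:+7.2f}  lon {lon:+8.2f}')
```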

Read More

How will 3D printers help improve banking KYC?

artificial intelligence

Yes, you just read that title correctly. 3D printers are widely seen as a potential game changer in the physical supply chain across many sectors and industries. The opportunity in Financial Services, however, seems far less likely, particularly in any form of real, practical application. Do you agree with that statement or not? Either way, I suggest you read on to find out more.

KYC – Why is it a sensitive subject?

Today, every Financial Institution runs its own KYC processes, for many reasons. At the top of the list are the reputational and financial risks, which remove any appetite to hand control of KYC to a third party such as a shared utility or data service. A bank caught by its regulator servicing the wrong client is usually exposed to millions or billions of dollars in fines. As we have seen in the news over the last three years, such occurrences fall into the public domain, usually with lasting reputational damage.

Approach to KYC nowadays and alternatives

The typical KYC process is executed manually, using a combination of paperwork, de-materialisation, and archiving. Overall, it is a costly and lengthy process that happens every time a new client comes on board or a new signatory is added to the relationship, and it must be refreshed and verified regularly. KYC processes also delay "time-to-revenue": the period between the agreed contract and the first day of transaction processing.

A number of initiatives have been introduced over the last few years to tackle this challenge from several angles. One idea that keeps coming back is to enable KYC to be done once and for all and then shared among all Financial Institutions and Counterparties: a shared utility that would let a client or counterparty, as a business or as an individual, "passport" its KYC identity across all of its financial suppliers. The benefits and advantages seem very compelling: reduced costs of processing KYC, reduced time-to-revenue for the financial suppliers, and less hassle for clients and counterparties. It's a win-win for everybody, isn't it?

Why is this nut so tough to crack?

The cost reductions and improved client experience look very small when put in perspective against the potential risks and costs of non-compliance. We are talking about millions or billions of dollars in fines, the risk of losing a banking or insurance license, even shutting down the business entirely. The incremental gains and advantages of digital and shared KYC do not yet appear to offset these risks. Every few months we read about a government fining a Financial Institution for facilitating illegal activities, or a shared utility or international business losing its clients' personal information and payment details to hackers. This is hardly an industry backdrop that encourages digital KYC!

What about 3D printers, what's the link?

As a consumer, I find that home 3D printers are overpriced gadgets with little practical purpose. As a B2B professional working with the largest Supply Chains in the world, the potential opportunity just blows my mind. Analysts agree that most global Supply Chains will be affected, with current patterns of commerce and logistics undergoing a complete transformation over the next few decades.
The biggest shift will happen around companies that focus on producing Intellectual Property, delivered in the shape of Digital Assets, such as the files containing the 3D model and assembly specifications for their products. Other companies will focus on the physical production of commercial items based on those Digital Assets. Analysts agree that most of this world will never be exposed to consumers, just as the world of global logistics is invisible to consumers today.

The disruption: Digital Assets and Digital Identity

If you download music, movies, or games regularly (legally, of course), then you probably know about Digital Rights Management (DRM). This early-2000s technology somewhat enabled content producers and commercial online sharing platforms to ensure you are the only person able to play a track, or to rent a movie for a certain period of time. 3D printers bring a new and far more compelling case for such DRM: the opportunity to control who can print a product, how many copies, for how long, with verified raw materials, and on certified printing equipment. There are typically two facets to this technology: the Digital Asset itself (the 3D design combined with printing requirements and authorised users) and the Digital Identity (the certified, authenticated businesses and users). You see where this is going now (a hypothetical sketch of such a signed print licence follows below).

Digital Identity management will spread fast and wide, riding the 3D printing revolution in both B2B and consumer markets. Digital Asset owners and producers will have an enormous stake in it, and KYC shared utilities will probably continue to experiment and grow over the next couple of years, with more and more use cases coming into the frame. I don't believe, though, that shared utilities serving a single industry will gather enough critical mass. Payments and Cash Management is itself already changing, with the introduction of PSD2 rules in Europe and the rise of Blockchain technology and distributed payment ledgers. Looking more broadly, banking users (businesses or consumers) are also beginning to require a unique Digital Identity for other aspects of their lives. Combining innovation with regulation over the next five years will be key, and the winner will likely be whoever manages to combine Digital Identity across several industries and markets, much as the IT Certificate Authorities (CAs) have spread across all industries since the early 2000s.
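To make the two facets tangible, here is a deliberately simplified, hypothetical sketch of a signed print licence, written with the Python cryptography library. The field names and the scheme itself are illustrative assumptions, not a description of any real 3D-printing DRM product: the Digital Asset owner signs the licence terms, and the certified printer verifies them before starting a job.

```python
# Hypothetical print licence: the Digital Asset owner binds a design file
# to an authorised printer identity, a copy limit, and an expiry date, and
# signs the result. The printer verifies the signature before printing.
# Illustrative assumptions throughout; this is not a real DRM scheme.
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric import ed25519

# The owner's key pair; in practice the public key would be anchored in a
# certificate issued by a trusted authority (the Digital Identity).
owner_key = ed25519.Ed25519PrivateKey.generate()
owner_public = owner_key.public_key()

design_bytes = b'...contents of the 3D model and assembly specifications...'
licence = {
    'design_sha256': hashlib.sha256(design_bytes).hexdigest(),
    'licensee': 'printer-fleet-eu-042',  # hypothetical certified printer ID
    'max_copies': 500,
    'valid_until': '2026-12-31',
}
payload = json.dumps(licence, sort_keys=True).encode()
signature = owner_key.sign(payload)

# On the printing side: verification raises InvalidSignature on any tamper
owner_public.verify(signature, payload)
print('licence verified for', licence['licensee'])
```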

Read More

Wearables, Big Data, and Analytics in Healthcare

As wearable technology, including smartwatches, fitness trackers, and even clothing and shoes with integrated sensors, moves into the mainstream, healthcare organizations are exploring ways to use these devices to simplify, transform, and accelerate patient-centric care. Their goals include boosting people's health, improving patient outcomes, streamlining manual processes, and opening new avenues for medical research and epidemiology. Analytics, data visualization, and reporting are central to those efforts.

Transforming Data into Insight and Action

Wearables today can monitor and gather wearers' activity level, heart rate, and other vital signs; reward wearers for healthy activities and habits; and alert the wearer and others, such as doctors, emergency responders, and family members, when problems arise. "Wearable health technology brings three distinctly beneficial trends to the table – connected information, community, and gamification," writes Vala Afshar on the Huffington Post. "By harnessing this trifecta, healthcare leaders have new ways to build engagement and create accurate, far-reaching views of both personal and population health."

Wearables are both producers of data (collecting and transmitting wearers' data) and consumers of data (receiving and displaying information about the wearer's well-being and progress). Wearables are textbook generators of big data, with high velocity, volume, and variety. And as in any big data scenario, transforming that data into insight and action requires a powerful, scalable analytics, data visualization, and reporting platform.

Wearables in healthcare share many characteristics with the networks of sensors in Internet of Things (IoT) applications. But healthcare adds complexities and wrinkles, particularly regarding security. With IoT, everyone agrees that security is important, but the rules and standards vary and are subject to debate. When individuals' personal health data is in the mix, however, more (and more complicated) laws, security regulations, and privacy concerns kick in. "A person's health information is particularly sensitive," writes Victoria Horderen in the Chronicle of Data Protection, "[b]oth in a legal sense (because health information is categorized as sensitive under EU data protection law) but also in an obviously everyday sense – people feel that their health information (in most but not all circumstances) is private." Horderen writes specifically about the EU Data Protection Regulation, but the points she makes apply globally. The takeaway, I think, is that a platform supporting a wearable initiative in healthcare requires a robust, proven security foundation.

Many Use Cases

With a flexible big data platform supporting wearables, many healthcare use cases arise. Most of these are possible with today's technology, while others could be on the horizon with future generations of devices. Some use cases include:

- A person under observation for heart disease can use a wearable to monitor his or her heart rate 24/7, not just while at the doctor's office. The wearable enables collection of both historical and point-in-time data, and the platform enables in-depth analysis of that data.

- Alerts presented on a smartwatch can provide customized encouragement for good behavior (such as walking or stair climbing) and positive lifestyle choices (such as getting enough sleep). Such uses are ripe for gamification: if the wearer walks a certain number of steps (customized for the individual), rewards are unlocked. People are more likely to embrace a wearable if it provides an element of fun and positive feedback.

- Data from large numbers of wearers can be anonymized and aggregated to perform epidemiological studies. Data can be segmented by geography, activity level, and demographics if wearers choose to opt in.

- A wearable paired with a GPS-enabled smartphone can transmit coordinates and pertinent data to first responders in case of an emergency and alert family members of the wearer's status.

- A surgeon wearing smart glasses can monitor patient vital signs and other medical equipment in real time during an operation without turning away from the patient.

Think Small, Think Big

As these use cases indicate, a platform for wearables in healthcare needs to operate on a micro level, sending customized, personalized alerts, recommendations, and actions to individuals based on their own data. But a platform should also enable macro-level analysis of vast quantities of data to spot trends and identify correlations within large populations. The ability to analyze data on a large scale not only holds promise for medical research; it also improves the wearable's value to the individual user. An intelligent platform with access to individual and aggregate data can, for example, tell the difference between a heart rate spike due to exercise (a good thing, to be encouraged) and a cardiac episode requiring attention and intervention, judging each case on its context rather than a single pre-set threshold (a toy sketch of this kind of contextual rule follows at the end of this post).

One last bit of good news for healthcare providers who want to embrace wearables: doctors are more trusted than any other group with consumers' personal data. According to research by Timothy Morey, Theo Forbath, and Allison Schoop published in the May 2015 issue of Harvard Business Review, 87 percent of consumers find primary care doctors "trustworthy" or "completely trustworthy" with their personal data. That percentage is greater than for credit card companies (85 percent), e-commerce firms (80 percent), and consumer electronics firms (77 percent), and much higher than for social media firms (56 percent). As wearable use grows, that healthy goodwill is worth building on.

Smartwatch image by Robert Scoble
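Here is the toy sketch promised above, showing the kind of contextual rule an intelligent platform might apply. The data model and thresholds are invented for illustration; they are not clinical logic or any vendor's algorithm.

```python
# Toy sketch: distinguish a heart rate spike explained by exercise from one
# that merits an alert, using activity context and a personal baseline
# rather than a single pre-set threshold. All thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class WearableSample:
    heart_rate: int        # beats per minute
    steps_last_5min: int   # accelerometer-derived activity context
    resting_rate: int      # personalized baseline from historical data

def classify(sample: WearableSample) -> str:
    spike = sample.heart_rate > sample.resting_rate * 1.5
    active = sample.steps_last_5min > 300  # brisk walking or better
    if spike and active:
        return 'exercise'  # encourage and reward the activity
    if spike:
        return 'alert'     # elevated rate with no exertion context
    return 'normal'

print(classify(WearableSample(heart_rate=150, steps_last_5min=600, resting_rate=65)))
print(classify(WearableSample(heart_rate=150, steps_last_5min=12, resting_rate=65)))
```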

Read More

To Keep or Not to Keep: A Records Management Question

With 90% of the world's data generated over the last two years and enterprise information growing at an exponential rate, the ability to effectively manage and govern the lifecycle of important electronic business information is more important than ever. Recently, OpenText CMO Adam Howatson sat down with Tracy Caughell, Director of Product Management, to discuss worldwide records management regulations, the consequences of non-compliance, and the cost benefits to organizations that embrace records management solutions.

According to Caughell, one of the benefits of OpenText Records Management is the ability to identify when records can be disposed of. "We all know that we need to get rid of stuff," she says. "Our strength is allowing [customers] to do it in a way that is defensible, a way that they can prove they did what they were supposed to do, when they were supposed to do it, with minimal disruption to their daily activities." Watch the video below to learn more, or click here.

From capturing, to classifying, to managing information, an enterprise-wide records management strategy can help organizations comply with laws, external regulations, and internal policies. By providing a structured and transparent way to maintain records from creation through to eventual disposition, Records Management can help enhance corporate accountability, ensure regulatory compliance, and make it easier to find the information you need. (A small sketch of what defensible, rules-based disposition can look like follows at the end of this post.)

More Information:
Read how Sprint uses OpenText Records Management to take control of their records and documents.
Learn more about the OpenText Records Management solution.

photo courtesy of Marcin Wichary
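As the closing illustration promised above, here is a minimal sketch of rules-based, defensible disposition. The record classes, retention periods, and code are invented for this post; they are not OpenText Records Management's actual schedule or API.

```python
# Minimal sketch of rules-based disposition: each record class carries a
# retention period, and the system computes (and can prove) when a record
# becomes eligible for destruction. All rules here are invented examples.
from datetime import date

RETENTION_YEARS = {
    'invoice': 7,
    'contract': 10,
    'marketing': 2,
}

def disposition_date(record_class, closed_on):
    """Earliest date the record may be defensibly destroyed."""
    years = RETENTION_YEARS[record_class]
    return closed_on.replace(year=closed_on.year + years)

def due_for_disposal(records, today):
    # Returning record, rule, and due date gives the evidence trail that
    # makes disposal defensible: what was destroyed, why, and when.
    return [(rec_id, cls, disposition_date(cls, closed))
            for rec_id, cls, closed in records
            if disposition_date(cls, closed) <= today]

records = [('INV-001', 'invoice', date(2015, 3, 31)),
           ('CTR-042', 'contract', date(2012, 6, 1))]
print(due_for_disposal(records, today=date(2023, 1, 1)))
```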

Read More