Analytics

Leverage Agile BI to Deliver the Next Wave in Business Intelligence [Webinar]

This coming weekend, 330 dogs will pack Pier 94 in New York City to compete in the Westminster Kennel Club’s Masters Agility Championship. The dogs will dive through rings, weave through poles like slalom skiers, navigate high walkways (please, don’t call them catwalks), slink through tunnels and leap over hurdles, racing against the clock. Dog agility competitions are great fun to watch – Fox Sports 1 will broadcast the finals on Sunday – in part because, with the right skills, even the puniest pooch might become top dog.

The race for agility isn’t confined to canines. Enterprises constantly strive to be agile, because agility helps organizations quickly and effectively respond to changing conditions. Business Intelligence (BI) should be agile, too; a recent report from Forrester Research, Drive Business Insight With Effective BI Strategy, proclaimed that “the future of BI is all about agility” and noted that Forrester “tracks more than 20 technologies that can make BI environments agile.” If there’s one thing you can take away from that statement, it’s this: Agile BI is important, but it’s not simple.

To help you make sense of this complicated topic, Actuate (now OpenText) has invited the author of that Forrester report, vice president and principal analyst Boris Evelson (@bevelson), to present a free webinar on February 24 at 11:00 a.m. PT. Titled “Leverage Agile BI to Deliver the Next Wave in Business Intelligence,” Evelson’s presentation will cover four critical steps that enterprises should follow when strategizing BI efforts to achieve business goals:

Prepare for your BI program: In noting the importance of preparation, Evelson’s report urges companies to “Understand that BI is a journey toward a moving target – and not just another project.” In the webinar he will elaborate on the ways organizations can prepare themselves, both in processes and technology, to succeed with agile BI.

Set the right strategy: Forrester has long advocated a “discover, plan, act, optimize” methodology – the basis of its Innovation Playbook – for enterprise IT projects. Evelson will explain how Forrester’s step-by-step methodology can streamline BI strategy.

Build an open, agile, and scalable environment that enables OEMs and SaaS providers to embed BI insights into apps and devices: An agile BI environment requires more than technology; its success also hinges on staffing, organization and processes. In his presentation, Evelson will talk through some of these requirements and put them in context for Original Equipment Manufacturers (OEMs) and Software as a Service (SaaS) providers who embed analytics and BI in other products.

Continue to measure and optimize your BI environment: Evelson is an advocate for “applying BI to BI” – that is, using reporting and metrics rather than subjective, qualitative measures to gauge the efficiency and effectiveness of BI efforts. He’ll talk about some of the Key Performance Indicators (KPIs) that should be measured to help an agile BI program continuously improve.

Evelson will wrap up his presentation with a question-and-answer session. Any queries that aren’t answered in the time allotted for the webinar will be addressed afterward via email. Whether your company is the alpha dog in its sector or a scrappy mutt at the back of the pack, agile BI can help you improve your game. Companies that want to ensure a successful agile BI strategy should sign up for Evelson’s webinar today.
(If you can’t attend, sign up anyway to get a replay link after the webinar is completed.) Registration is free; we won’t make you jump through hoops. Photo of a beautiful dog posing on an agility course by Bill Garrett. Boris Evelson portrait courtesy Forrester Research. 

Read More

Data Driven Digest for February 6

Each Friday we share some favorite reporting on, and examples of, data driven visualizations and embedded analytics that came onto our radar in the past week. Use the “Subscribe” link at left and we’ll email you with new entries.

Hot shot: It’s been fascinating to watch Anthony Davis evolve since he entered the NBA in 2012. In a great story on Grantland, Kirk Goldsberry dissects and admires that evolution, saying Davis “has turned the increasingly out-of-style territory within the 3-point arc into his personal basketball laboratory.” He also says Davis’ favorite shot is not a flashy dunk nor a dramatic three-pointer, but a simple jump shot from above the free throw line. Davis is also improving his old-school bank shot. Goldsberry backs up these observations with the terrific graphic above. Actually, we’ve posted only one of two graphics; the other one shows Davis’ shooting in his rookie season. Click through to see and compare them both; the difference between the two tells the story.

Dry ideas: As other parts of the U.S. dig out from severe winter storms, Californians hope for rain this weekend. But we in the Golden State are constantly reminded that an entire winter of heavy rains won’t crack our persistent drought. The drought’s effects are felt far beyond the state’s borders: the USDA says “major impacts from the drought in California have the potential to result in food price inflation above the 25 year historical average of 2.8 percent.” On the map above, NASA’s Earth Observatory plots how the drought has affected California farmland; comparing images of active and idle farmland from 2011 and 2014, you see how much acreage has been taken out of production due to lack of irrigation water.

On the move: Randy Olson is back with another application of data science to issues that, frankly, aren’t very important. We highlighted his chart of gender-neutral names a couple of months ago. This time Olson has focused his talents on a profound question: Where’s Waldo? Spinning off from a 2013 article on Slate, Olson was determined to “pull out every machine learning trick in my tool box to compute the optimal search strategy for finding Waldo.” (Yes, he is aware that this is silly, but it’s his time to waste.) The kernel density estimate chart above is the result of Olson’s calculations (the dotted line represents the spine of a Where’s Waldo spread), and he’s published it along with an optimal search path. We admire how Olson explains his questions, methods and conclusions, and we’re relieved to know he also spends time on bigger issues.

Do you have a favorite or trending resource on embedded analytics and data visualization? Share it with the readers of the Actuate blog. Submit ideas to blogactuate@actuate.com or add a comment below. Subscribe (at left) and we’ll email you when new entries are posted.

Recent Data Driven Digests:
January 30: World population, Super Bowl geography, big game commercials
January 23: SOTU tweets, Moore’s Law, Big Data roles
January 16: Tallest buildings, Ohio State’s Elo rating, airport efficiency

Read More

Embedded Analytics – Making the Right Decision

Think back to the last big purchase you made. Maybe you bought or rented a home, purchased a car, or chose a new computer or mobile provider. If you’re a smart shopper, you considered your decision from many angles: Does the product or service meet my needs simply and elegantly? Is the manufacturer solid and reliable? Will my choice serve me well into the future?

We face similar questions when we decide on a third-party vendor to embed technology in the apps we build: Is the technology powerful enough? Is it easy to embed? Will the vendor be around in the future? Will the technology evolve and improve as my needs – and those of my customers – change over time?

Elcom International faced such a decision almost a decade ago. Elcom’s software product, PECOS, digitizes the procurement process; Kevin Larnach, the company’s Executive Vice President of Operations, describes PECOS as “Amazon for business,” with extensive controls and business process integrations required by leading governments, utilities, businesses and healthcare providers. More than 120,000 suppliers provide products in PECOS through Elcom’s global supplier network, and PECOS is used by more than 200 organizations worldwide to manage more than $15 billion in total spending annually.

Elcom decided on Actuate to provide the analytics for PECOS. Thanks to embedded analytics, PECOS users avoid and reduce costs, get visibility into all aspects of the procurement process for oversight and audits, and reduce risks from rogue purchasing, off-contract purchasing, and slow, manual record-keeping. Larnach says embedded analytics has helped one PECOS user, the Scottish Government, accrue more than $1 billion in audited savings over the past seven years.

Larnach told Elcom’s embedded analytics story in a recent free Actuate-sponsored webinar. He shared the virtual stage with Howard Dresner (@howarddresner), Chief Research Officer at Dresner Advisory Services and author of the Embedded Business Intelligence Market Study, published in late 2014. An on-demand replay of the webinar is now available.

Dramatically Changing Requirements

As Elcom added features and capabilities to PECOS over the last decade, and as its user base grew, the decision to embed Actuate’s analytics technology has been vindicated. “We’ve been able to work with [Actuate] as a sole vendor,” Larnach says. Actuate “satisfies all of our embedded BI requirements, which have changed dramatically over the years.”

Larnach used this graphic to show how the capabilities in PECOS and Actuate’s analytics capabilities grew and evolved together. At the base of the pyramid we find basic transactional reporting. “Most of the embedded reporting and embedded analysis that we offered in early stages of our relationship [with Actuate] centered around transactional reporting,” Larnach says. This reporting isn’t limited to summary information; it includes line detail for each and every purchase. Accommodating user requests, Elcom built an extensive library of templates, forms and business documents with embedded analytics into PECOS.

The ability to provide consistent, repeatable analysis led Elcom’s customers to want more; specifically, they wanted to perform periodic analysis of their procurement data. (That’s the second layer up on the pyramid.) The request made good sense; after all, PECOS tracks details of every transaction, and therefore creates an audit trail that begs for analysis.
“Embedded BI provides analysis against those audit trails,” Larnach says, which both helps organizations uncover waste, fraud and abuse and also drives improved user behavior that locks in savings and efficient business processes.

This ability to provide ongoing analysis has led to Elcom adding trend analysis and key performance indicators (KPIs) to PECOS – the third layer on the pyramid. Demand for those capabilities is growing among PECOS users, Larnach says. “We’re starting to do [dashboards and charting] as a standard feature of our software,” he explains, which leads to the tip of the pyramid: one-off analysis.

“I see [one-off analysis] as a huge growth area for our organization, especially for customers who have been with us for many years,” Larnach says. Those customers have large volumes of transactional data to analyze – a full terabyte of data, in the case of the Scottish Government – and they want to eke out every bit of savings from that data that they can find. “When you take on a solution like ours, there are big savings areas up front because it’s a dramatic change from manual business processes to electronic ones,” Larnach explains. “But over the years, as you use [PECOS] and look for continuous improvement, it becomes more and more difficult” to find savings. But that’s exactly where the one-off analysis capabilities now in PECOS help uncover “hidden gems,” Larnach says – such as a hospital system that saved hundreds of thousands of dollars by consolidating procurement from 11 types of latex gloves to three. “That could only be uncovered by the type of analysis that’s available through advanced BI tools – and some smart people, obviously,” Larnach says.

Check out our free webinar to hear more about how Elcom uses embedded analytics in PECOS, and learn more about the powerful e-Procurement solution on the Elcom website. It’s the right decision. “Decide” image by Matt Wilson.

P.S. If you want to embed analytics and are trying to decide whether you should leverage a third-party platform or create your own analytics from scratch, Actuate offers a free white paper, “11 Questions: Build or Buy Enterprise Software,” that helps you make the best choice. And if you need help deciding among embedded analytics providers, check out our infographic, “12 Questions for Providers of Embedded Analytics Tools.”

Read More

Exploring iHub Examples: Call Center Dashboard

A typical call center – serving a financial services firm, for example – handles countless unique inquiries every day. Because call centers generate an endless stream of valuable information, many companies use dashboards to understand call center operations and find ways to improve processes. In the iHub Examples we have reproduced a dashboard visualizing extensive, detailed information about a fictional financial services call center. Our dashboard has two tabs, Call Analysis and Calls By State, that we’ll look at individually in this blog post.

Call Analysis

What it is: A call center dashboard (shown above) with three selectors on the left that affect four different charts.

What to look for: To see how the selectors affect the charts, experiment with various combinations of Severity, Vendor, and Service. As you add and subtract selectors, the charts change. But that’s not the only way to alter the data view that the charts provide. For example, look at Hold Time Impact on Customer Satisfaction, the chart on the lower right of the dashboard. Say you want to isolate calls that were given poor ratings in January 2014; first click on the blue squares next to Good and Acceptable in the legend; those items will grey out and their data will disappear from the chart. Now move the sliders in the blue timeline below the chart to bracket January 2014. This is what you’ll see:

The other charts in the dashboard also have interactive features, such as drill-down, drill-up, and multiple selection options. The entire dashboard also responds to your browser window size; drag your window wider and narrower and watch how the various dashboard elements respond. One other thing to know: This dashboard is standalone, run entirely by iHub, and not embedded in another application.

Calls By State

What it is: An interactive, color-coded geospatial visual. (That’s data visualization-speak for a map, as shown above.) The legend to the right of the map gives three colors that quickly visualize the call volume for each state.

What to look for: Hover over the map and specific figures pop up by state. Click on an individual state to get a detailed report; you can then Enable Interactivity on the state report to organize the chart. A map like this is easy to create with iHub’s out-of-the-box capabilities.

One More Thing

On the main iHub Examples page that we described in part 1 of this series, you’ll also find the three items shown above. We won’t describe them in detail here; instead, we encourage you to explore them on your own, using the knowledge you’ve gained by working through the other example applications. The Flight Delay example, by the way, is closely related to Aviatio, an example application that we created specifically to demonstrate how iHub can power an application for tablets such as the iPad. Once you’ve looked at the example in iHub, Trial Edition, visit aviatioexample.actuate.com on your tablet to see it in action. (It works well on most browsers, too.)

Source: Actuate

Read More

Why Location Intelligence is Critical to Your App

Anytime I’m standing in front of a directory map at the mall, my eyes initially navigate to that little red dot that screams, “You are here!” As a consumer, it’s important to get your bearings before navigating through endless choices. Retailers also want to know where I am – if I fit their target demographic – because they want to attract me to their stores. The ability to track a customer’s location and pinpoint opportunities to upsell or cross-sell through analytics has been shown to improve customer engagement and brand loyalty.

So, can location-based technology combined with embedded analytics create a perfect storm for companies that want to increase customer engagement and improve revenue opportunities? Experts like John Hadl, Venture Partner at USVP and founder of mobile advertising agency Brand in Hand, say yes. “Location is the critical component that allows marketers to spend their mobile marketing dollars effectively,” notes Hadl.

Location-based technology and data analytics, used together, are often called “location intelligence.” Location intelligence improves the customer experience by allowing companies to visualize, analyze and track partnerships, sales, customers and prospects, according to research from consulting firm Pitney Bowes. Having both a person’s location and their relevant information sets the stage for some innovative approaches to customer experience.

Location Intelligence Meets Business Intelligence

Research shows that user interest in specific location intelligence features grew almost across the board in the last year. According to the Dresner Advisory Services’ 2015 Location Intelligence Market Study, the most important location intelligence features for users are map-based visualization of information, drill-down navigation, and maps embedded in dashboards.

“From the supplier standpoint, we see vendors having a mixed view of the significance of Location Intelligence, with an increasing number this year saying it has critical importance,” writes Howard Dresner (pictured below). “Industry support is only somewhat aligned with user priorities for Location Intelligence features and geocoding, but GIS [Geographic Information Systems] integration and mobile feature support are well aligned.”

The report, which is part of Dresner’s Wisdom of Crowds survey series, also notes that sales and marketing departments are most apt to believe location intelligence will impact their jobs. Other findings include:

Compared to 2014, location intelligence use is driving farther down organizational ranks.
Respondents’ highest priority is the ability to map locations by province/state, country, and postal code.
Governments, along with the retail and wholesale segment, are most interested in discrete geocoding features.

One challenge for organizations that hope to take advantage of location intelligence – aside from being precise – is the ability to map location data to the data set. Embedded analytics might be the solution to this obstacle.

Daily Coffee, Chock Full of Data

Let’s look at how location intelligence works: In San Francisco you can’t walk more than a block or two before you hit some type of specialty coffee business (and yes, Starbucks does qualify). But all specialty coffee is not the same, and each neighborhood, because of its population, can provide different opportunities and potentially unique user experiences.
As you can see from the infographic below, analysts from Pitney Bowes using CAMEO software determined that residents of San Francisco’s Sunset District are cosmopolitan suburbanites who typically engage best with exotic coffee selections and ample space to turn around a stroller. Hipsters living near San Francisco’s Financial District are more likely to attend art shows or poetry readings, so they would prefer a coffee experience tailored to a mid-afternoon espresso and a late-evening mocha.

A mobile app can help these users find these ideal coffee experiences. Several coffee companies have them; they use embedded GPS information to help customers find their coffee – and often, to help the coffee find the customers as well. Blending location information with demographic data helps baristas provide customers with an improved experience.

For more thoughts on how embedded analytics plays a major part in location intelligence, check out our series of discussions about the new business intelligence. And, of course, subscribe to this blog for all the latest thinking about embedded analytics. (Infographic courtesy of Pitney Bowes and CAMEO)

Read More

Enterprise Document Transformation: 8 Must-Have Features for Superior Performance [Part 2]

As we discussed in the previous post, if your organization regularly transforms documents and print streams, either dynamically (on-the-fly) or in large batches, you want and expect the best possible performance from your technology solution. Download a complete white paper, Enterprise Document Transformation: 8 Must-Have Features for Superior Performance, for more information on what we highlight in this post.

For a document transformation solution to perform at the true speed of enterprise, it should have these 8 performance-enhancing features:

Multithreading
Memory Optimization
Resource Caching
Streaming Input and Output
Queuing
Detailed Monitoring and Reporting Tools
Flexible Tuning Controls
Use of Profiling during Software Development

In the last blog, we discussed the first four features. In this blog, we will discuss the last four.

Queuing

Document transformation software can realize significant performance gains by optimizing job queues. The software manages a queue by prioritizing jobs, balancing resources, protecting the virtual machine from crashing (by not overloading the queue), and performing related functions to make the best possible use of available computing power. High-performance document transformation software is able to get maximum document throughput by efficiently managing and prioritizing two types of queues: internal queues (within the transformation software) and external queues (from third-party software).

Detailed Monitoring and Reporting Tools

Document transformation software that consistently offers superior performance is able to do so because it has built-in tools for monitoring system activities and reporting on them. Among other things, these reports indicate how resources are being used and where bottlenecks are occurring, giving administrators the information they need to fine-tune the software for better performance.

Flexible Tuning Controls

To compensate for performance variables such as computer hardware and content type, transformation software needs to be very flexible, with many options for managing and configuring:

Threads – set the number of threads to control the amount of concurrent work being performed on a given transformation.
Buffer Sizes – optimize throughput.
Caches – process common resources once, reuse many times.
Resource Management – unload least-used resources as needed to prevent memory shortages.
Queues (Job Management) – control the number of concurrently running transformations.

When administrators have a good understanding of how a document transformation solution is being used, they can modify the settings, test the new configuration, and if necessary, make additional adjustments until they’re satisfied with the performance. Even if conditions change significantly – for example, from dynamic to batch jobs – the software should be flexible enough that administrators can tune it to compensate for the new scenario. (A hypothetical configuration sketch appears at the end of this post.)

Use of Profiling during Software Development

Document transformation software that has been profiled during the development process is more likely to perform at a high level. Using profiling tools to track memory leaks and find bottlenecks, developers are able to identify and address performance issues before releasing the software to customers.

Conclusion

Software performance is an important thing to consider when you evaluate enterprise-grade document transformation solutions because it can have a significant impact on customer service and business efficiency.
Performance-enhancing features such as multithreading and resource caching make optimal use of IT infrastructure, allowing your document transformation solution to handle document loads now and in the future, when your business has grown and document volumes have increased.

How Actuate Can Help You

If your current document transformation system is underperforming, contact us today. Our expert team can help you define your requirements going forward and develop an appropriate, high-performance solution that will take you where you want to go.
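P.S. To make the flexible tuning controls described above more concrete, here is a minimal, purely hypothetical configuration sketch in JavaScript. Every option name is invented for illustration and does not correspond to any particular product’s settings.

// Hypothetical tuning profile for a document transformation engine.
// All option names are illustrative only.
const tuningProfile = {
  threads: 8,                  // concurrent work performed on a given transformation
  bufferSizeKB: 256,           // larger I/O buffers generally improve throughput
  cache: {
    maxResources: 1000,        // process common resources once, reuse many times
    evictLeastUsed: true       // unload least-used resources to prevent memory shortages
  },
  jobQueue: {
    maxConcurrentJobs: 4,      // cap the number of concurrently running transformations
    priority: "dynamic-first"  // e.g. favor on-the-fly jobs over large batches
  }
};

The value of collecting such controls in one place is that administrators can change one setting at a time, re-test, and roll back easily – exactly the tune-and-verify loop described above.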

Read More

Under the Hood – BIRT iHub F-Type: Understanding the Primary Configuration Files

Welcome to the second installment of the Under the Hood – BIRT iHub F-Type blog series. Today, I will discuss the three primary configuration files of a BIRT iHub F-Type installation: acserverconfig.xml, acpmdconfig.xml and web.xml. In most situations in which a configuration tweak is needed, you will work with one of these three files. However, please keep in mind that this is not an exhaustive list of all configuration files, and there are situations where you may need to modify a file other than these three.

Before I discuss these primary configuration files, please remember to always make a backup copy before you modify a configuration file. A backup can mean the difference between the extra work of trying to restore the contents of a file and a quick rollback to the most recent working version.

1. acserverconfig.xml – This configuration file uses XML version 1.0, UTF-8 encoding, and is structured just as one would expect for a standard XML file. The root element is Config and its child elements are System, Templates, ResourceGroups, Printers, MetadataDatabases, Schemas, Volumes, FileSystems and ServerList. Of these child elements, I will discuss in detail the System, Templates, and ResourceGroups elements. In most scenarios, changes should not be required or made to the MetadataDatabases, Schemas, Volumes, FileSystems and ServerList elements. Note: Any changes made to acserverconfig.xml require a restart of BIRT iHub to take effect.

System – The majority of the attributes of the System element should not be modified from their default settings. However, there are two exceptions I would like to discuss in detail: DefaultLocale and DefaultEncoding. Depending on the region where BIRT iHub is installed and maintained, these settings may need to be modified to align with regional or company standards. DefaultLocale follows Java standards for its naming convention, so if you want the DefaultLocale to be “English – United States,” the value should be set to “en_US”. DefaultEncoding, as its name indicates, is the default encoding type to be used and inherited on the iHub. Depending on the installation and requirements, you may need to provide a different encoding type such as “utf-8”. I would not recommend changing the default encoding unless you determine that an issue is caused by the encoding type.

Templates – Four different templates are configured within this element: small, medium, large and disabled. Only one template can be used at a time by BIRT iHub. By default, the small template will be selected for a BIRT iHub F-Type installation. You can determine which template is currently in use within acpmdconfig.xml; this will be discussed later in this article. Within each template are quite a few child elements, all of which have multiple attributes. In most situations, the majority of these attributes do not need to be modified. The values that most often need to be modified are the StartArguments for the various resource groups. These are the start arguments for the JVM that hosts the various resource group processes, and they use the standard syntax for JVM start arguments. One of the most common changes to the start arguments is an increase in the maximum heap size (-Xmx) for a particular resource group. The small template is configured with a default maximum heap size of 512 MB for all of the resource groups. Any modifications to the heap size should be done on an as-needed basis and within the resource limitations of the environment. (A simplified fragment illustrating this structure appears below.)
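The fragment below is an illustrative sketch, not a verbatim copy of a real acserverconfig.xml: the element names follow the structure described above, but the nesting is simplified, most attributes are omitted, and the values (including the resource group name) are examples only.

<!-- Simplified, illustrative sketch of acserverconfig.xml -->
<Config>
  <System DefaultLocale="en_US" DefaultEncoding="utf-8" ... />
  <Templates>
    <Template Name="small">
      <!-- Example: raising one resource group's maximum heap (-Xmx)
           from the small template's 512 MB default to 1 GB -->
      <ServerResourceGroupSettings Name="ExampleResourceGroup"
          StartArguments="... -Xmx1024M" ... />
    </Template>
    <!-- medium, large and disabled templates follow the same pattern -->
  </Templates>
  <ResourceGroups ... />
  <!-- Printers, MetadataDatabases, Schemas, Volumes, FileSystems, ServerList -->
</Config>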
ResourceGroups – There are very few situations in which you need to modify the resource groups, but I will discuss them briefly because it’s good to understand them. There are five resource groups that correspond directly to the resource group settings in the templates, based on the name attribute. Settings changes for the resource groups will almost always take place in the ServerResourceGroupSettings within the template, and not in the ResourceGroups directly. The “Disabled” attribute is the one you may need to modify. In most situations it should be left with the default setting of “false” so the resource group is enabled. However, if a particular resource group is not used or you want it to be unavailable, this attribute should be changed to “true”. For instance, if you want to prevent background jobs from running and disable scheduling, set the “Disabled” attribute in the BIRT Factory resource group to “true”.

2. acpmdconfig.xml – This configuration file contains settings for the shared environment variables, internal SSL settings, SOAP endpoint information, disk thresholds, and the embedded Tomcat, ihubc and iHub processes. It also controls the capacity template selected from acserverconfig.xml. In this post I will focus on the environment variables, the embedded Tomcat, some of the processes and the disk threshold settings. As with acserverconfig.xml, any changes made to acpmdconfig.xml require a restart of the iHub to take effect, and a backup should always be made before modifying a configuration file.

EnvironmentVariables – As the name implies, this element contains the various environment variables that are shared with all BIRT iHub processes. Of all the environment variables, the three that I highly recommend memorizing are AC_SERVER_HOME, AC_DATA_HOME and AC_CONFIG_HOME.

AC_SERVER_HOME – As the name describes, this environment variable points to the home installation directory of BIRT iHub. That directory contains all of the various sub-directories, including bin, data, etc, jar, shared and tools.

AC_DATA_HOME – This is where the majority of BIRT iHub log files will be written, so if you need to locate a log file, this is where to start.

AC_CONFIG_HOME – This points to the location where acserverconfig.xml is stored, as well as some of the files used for email notifications.

Embedded Tomcat – As discussed in my previous blog post, the process name associated with the embedded Tomcat is ihubservletcontainer. Of its various attributes and child elements, the ones that most often need modification are the CmdArguments and log settings. As with the resource group settings in acserverconfig.xml, the CmdArguments are the arguments passed to the JVM that hosts Tomcat. It accepts standard JVM arguments and syntax. The log directory and log output pattern can also be changed by modifying the LogDir and LogPattern attributes, respectively.

iHub Process – Although no modifications to the settings for the iHub processes are required in most cases, it is good to know where to go if you decide that a change is required. The most likely changes relate to start arguments for the JVM, such as increasing the maximum heap size. As with the embedded Tomcat, the CmdArguments attribute contains the JVM start arguments and follows standard syntax.

ihubc Process – The one main takeaway I want to emphasize about the ihubc process is that the value specified for type is “native” – not “java” like the iHub process.
This is because ihubc is a native binary compiled for a specific operating system type, not a Java process, so it does not take Java start arguments. You shouldn’t need to make changes to the configuration of this process.

BIRT Service – The one property to remember about this setting is “CapacityTemplate”. As indicated earlier, acpmdconfig.xml determines which template within acserverconfig.xml is in use, and the value specified for this property is the template that will be used. By default in a BIRT iHub F-Type installation, the value specified will be “small” to use the small template. If you change this value, it must exactly match the name attribute of a template within acserverconfig.xml.

Disk Thresholds – There are two elements related to disk thresholds within acpmdconfig.xml: “LowDiskThreshold” and “MinDiskThreshold”. Although these values do not typically need to be modified, it is important to understand what they mean.

LowDiskThreshold – Specified in MB, this value indicates the amount of free disk space remaining when BIRT iHub starts sending low disk space warnings to the log files. The out-of-the-box value is 300, meaning that if free space on the drive where BIRT iHub is installed and stores its data reaches 300 MB or less, warning messages will begin to appear in the logs.

MinDiskThreshold – Also specified in MB, this value indicates the free disk space value at which BIRT iHub shuts itself down. As with LowDiskThreshold, warnings will show up in the log files when this threshold has been reached and the iHub is shutting itself down.

3. web.xml – If you have worked with a J2EE deployment before, you will already be familiar with a web.xml file. Within BIRT iHub, the web.xml file for the embedded Tomcat used in the front-end iPortal is located at “AC_SERVER_HOME/web/iportal/WEB-INF/”. Most of the properties in web.xml are accompanied by detailed comments about the property itself and the value specified. I will not go into depth on the properties or values within web.xml because there are hundreds of them. Instead, I want to draw your attention to the existence of this file, and tell you that it contains the servlet configuration settings for the front-end embedded Tomcat that hosts the Actuate Information Console (also known as iPortal).

You may have noticed that, in the majority of situations, these configuration files should not need to be modified. Please remember this before modifying the configuration files – particularly in situations where you are unsure of the specifics of the setting or value that is being changed. When in doubt, ask a question before modifying a primary configuration file. The developer forums are the perfect place to ask a question if you are unsure. Changing values when you do not understand their setting or meaning can result in unexpected behavior that may not surface immediately. Whenever modifications are made, I highly recommend adding comments or creating a log with details on all modifications. And one last time, I would like to emphasize the importance of making a backup of a configuration file before you modify it. If something goes wrong with a configuration file, the quickest and easiest way to restore BIRT iHub to a fully operational state is to restore a configuration file that is known to be good.

Thank you for reading this installment of Under the Hood. If you have questions, post them in the comments below or in the BIRT iHub F-Type forum.
The other topics in this series are listed below.

-Jesse

Other topics in the Under the Hood series:
Understanding the Processes

Read More

Extend Interactive Viewer with Table-Wide Search [Code]

Sixth, and final, in a series of blog posts about free extensions to OpenText Interactive Viewer.

Do your users ever need to filter across multiple columns in an iHub table? It can be painful for a user to filter a table column by column when the term they’re filtering on appears in several columns. In this situation, users want a way to filter the entire table at one time. This post shows how to create a table-wide search box in a table header, like the one shown above.

Our table-wide search is flexible: The user can make the search case sensitive with a simple click of a check box, and clear the filters by searching for nothing. Just to the right of the search box, we’ve added a <div> that helps the user know what searches have already been applied to the table (as seen below), making it easier to apply multiple filters on top of each other.

We perform table-wide searches by iterating through the resultset and, if we find a match in the columns that we are searching, adding the row’s unique identifier (we call this a RowID) to an array, and finally returning all of the rows identified in the array.

The steps for creating a table-wide search box are:

1. Add one text item for the search box and another text item for the list of search terms in the header of the table. Depending on the widths of the columns in the table, you may want to merge several cells for these text items.

2. Change the type of each text item to “HTML”.

3. Copy the <input> HTML elements below into the leftmost text item. The screenshot below the block of code shows how it should look.

Search all columns: <input type="text" id="searchString"><br><input type="checkbox" id="ignoreCase" checked="true"> Ignore case<input type="submit" value="Search" onclick='javascript:searchTable()'>

4. Copy the <div> HTML element below into the rightmost text item. The screenshot below the block of code shows how it should look.

<div id="searchTerms">yep</div>

5. In clientScripts onContentUpdate, paste the code found at the end of this document into the window. (You can download the code in a Text File, or download the Report Design.) You can find clientScripts onContentUpdate by clicking on an empty portion of the Layout Manager, clicking on the Script tab, and selecting clientScripts in the first pulldown and onContentUpdate in the second. It will look like this:

6. Create a column in your dataset called “ROWID” and make it the first column in your table. Make this column Hidden via the Property Editor. In the dataset, only the column “ROWID” must be unique; you don’t necessarily need to use the SQL ROWID (which your table may not even have). In our example, ROWID is the following: “SELECT ROW_NUMBER() OVER () AS ROWID, COUNTRY, …”.

7. Test your report in OpenText Analytics Designer.

Troubleshooting

If your embed code is not working, try debugging in Chrome. If you add “debugger” in your JavaScript, Chrome will break at that point when the Chrome tools debugger is open.

Conclusion

We can make it easy for a user to find the information that he or she wants by simply adding a couple of HTML input items and a <div> to the table header and including a small amount of JavaScript. We hope you’ve found this series of extension tips for iHub Interactive Viewer helpful. Please Subscribe (at upper right) to be notified when they are posted, and let us know in the comments what other extensions and functionality you’d like to see.

Previous blog posts in this series:
1. Extend Interactive Viewer with Row Highlighting
2. Extend Interactive Viewer with a Pop-Up Dialog Box
3. Extend iHub Reports and Dashboards with Font Symbols
4. Extend iHub Dashboards with Disqus Discussion Boards
5. Extend iHub Interactive Viewer with Fast Filters

Full Table Search JavaScript Code

var columns = new Array();
window.myViewerId = this.id;

// Show the list of search terms applied so far in the "searchTerms" <div>.
this.createFastFilters = function () {
    var searchTerms = sessionStorage["searchTerms"];
    var termDiv = document.getElementById("searchTerms");
    if (searchTerms != "" && searchTerms != 'undefined' && searchTerms != null) {
        termDiv.innerHTML = "Search Terms: " + searchTerms;
    } else {
        termDiv.innerHTML = "No Search Terms";
    }
}

// Called when the user clicks Search: record the term, then download the
// result set so it can be scanned client-side.
window.searchTable = function () {
    var table = actuate.getViewer(myViewerId).getTable();
    var elem = document.getElementById("searchString");
    var searchTerm = elem.value;
    // Searching for nothing clears all table-wide filters.
    if (searchTerm == "") {
        sessionStorage["searchTerms"] = "";
        table.clearFilters("ROWID");
        table.submit();
        return;
    }
    if (sessionStorage["searchTerms"] == "" || sessionStorage["searchTerms"] == 'undefined' || sessionStorage["searchTerms"] == null) {
        sessionStorage["searchTerms"] = searchTerm;
    } else {
        sessionStorage["searchTerms"] = sessionStorage["searchTerms"] + " && " + searchTerm;
    }
    var request = new actuate.data.Request(table.getBookmark(), 0, 100);
    request.setMaxRows(0); // 0 = no row limit
    request.setColumns(columns);
    actuate.getViewer(myViewerId).downloadResultSet(request, window.searchColumns);
}

// Scan every column of every row for the search term and filter the table
// down to the matching ROWIDs.
window.searchColumns = function (resultSet) {
    var elem = document.getElementById("searchString");
    if (elem.value.length < 1) {
        return;
    }
    var searchTerm;
    if (document.getElementById("ignoreCase").checked == true) {
        searchTerm = new RegExp(elem.value, "i");
    } else {
        searchTerm = new RegExp(elem.value);
    }
    var columnIndex;
    var i = 0, rowidIndex = 0;
    var rowIds = new Array();
    var resultColumns = resultSet.getColumnNames();
    // Locate the hidden ROWID column.
    for (columnIndex = 0; columnIndex < resultColumns.length; columnIndex++) {
        if (resultColumns[columnIndex] == "ROWID") {
            rowidIndex = columnIndex;
            break;
        }
    }
    while (resultSet.next()) {
        for (columnIndex = 0; columnIndex < resultColumns.length; columnIndex++) {
            if (resultSet.getValue(columnIndex + 1).search(searchTerm) != -1) {
                rowIds[i] = resultSet.getValue(rowidIndex + 1);
                i++;
                break; // One match per row is enough; avoids duplicate ROWIDs.
            }
        }
    }
    // Found at least one match, so create the filter.
    if (i > 0) {
        var table = actuate.getViewer(myViewerId).getTable();
        var filter = new actuate.data.Filter("ROWID", actuate.data.Filter.IN, rowIds);
        table.setFilters(filter);
        table.submit();
    }
}

this.createFastFilters();

Read More

Data Driven Digest for January 30

Each Friday we share some favorite reporting on, and examples of, data driven visualizations and embedded analytics that came onto our radar in the past week. Use the “Subscribe” link at left and we’ll email you with new entries.

Pop Quiz: I heard on the radio yesterday morning that the population of India will surpass that of China in the next 15 years. Soon thereafter I learned about the map above (click through for a high-resolution version), created by a Redditor called Tea Dranks and shared widely on the website of The Independent. The map graphs the world’s population visually by country, with each small square representing 500,000 people. (That’s why, at first glance, the map looks pixelated.) While that scale generally “works” for South America, Europe, and much of Africa – where population and geographic area run roughly in parallel – it creates strange effects in other parts of the world, and makes the burgeoning populations of India and China more clearly visible.

State of Play: Sunday’s Super Bowl is a true bicoastal affair, pitting the west coast’s Seattle Seahawks against the east coast’s New England Patriots. But does fan support hew to the west/east split? Website marketing firm AddThis created the map above showing which team the residents of each state support in the game. Click the map for the full infographic, which has other stats too. We like the concept of the map, but it begs for interactivity – so we’re less pleased that the data and algorithm are hidden from users. What do you think?

Commercial Time: The team at media intelligence firm Kantar Media has created a Super Bowl-related data visualization that’s more satisfying. It visualizes the other big game on Sunday – no, not the Puppy Bowl, but the advertising game. Kantar’s interactive chart (be sure to click through for the full version) tracks ad spending by sector since 1995. Autos have come on strong in recent years, as you can see in the snippet above. You can click each individual year to see spending by individual brands, or each sector to see the spending trend. Nicely done.

Do you have a favorite or trending resource on embedded analytics and data visualization? Share it with the readers of the Actuate blog. Submit ideas to blogactuate@actuate.com or add a comment below. Subscribe (at left) and we’ll email you when new entries are posted.

Recent Data Driven Digests:
January 23: SOTU tweets, Moore’s Law, Big Data roles
January 16: Tallest buildings, Ohio State’s Elo rating, airport efficiency
January 9: Global education, rainfall animation, cutting-edge visualizations

Read More

Enterprise Document Transformation: 8 Must-Have Features for Superior Performance [Part 1]

Organizations that issue high-volume customer communications such as credit card bills, bank statements, insurance policies, and telephone bills must have the ability to print this content, present it online, and efficiently store it. Document transformation solutions make it possible for organizations to convert documents from one format to another, as required. If your organization regularly transforms documents and print streams, either dynamically (on-the-fly) or in large batches, you want and expect the best possible performance from a technology solution. Download a complete white paper, Enterprise Document Transformation: 8 Must-Have Features for Superior Performance, to find out more about the success factors we are highlighting in this blog post.

Software performance can have a significant impact on your business, affecting your ability to meet service level agreements (with your internal and external stakeholders), use resources efficiently, handle business spikes, and adapt to business growth. Of course, not all document transformation solutions are created equal. Some consistently perform better than others, so it’s important to know what variables can affect performance and become acquainted with some of the performance-enhancing features found in state-of-the-art systems.

Performance Variables

The performance of a document transformation solution depends on several factors:

Systems architecture
Complexity of input and output print streams
How and where print stream resources are stored
Specific application requirements
Use of correlated fonts versus rasterization

To make optimal use of existing IT infrastructure and achieve top performance, document transformation solutions must have sophisticated design features for controlling the major performance variables.

Must-Have Performance Features

For a document transformation system to perform at the true speed of enterprise, it should have these 8 performance-enhancing features:

Multithreading
Memory Optimization
Resource Caching
Streaming Input and Output
Queuing
Detailed Monitoring and Reporting Tools
Flexible Tuning Controls
Use of Profiling during Software Development

We will discuss the first four features in this blog, and the last four will be discussed in a future blog post.

Multithreading

Transformation software can process documents significantly faster if it has been designed to harness the power of parallel processing. A high-performance system makes optimal use of computing resources by breaking up each document into numerous parts, assigning the individual parts to multiple threads for transformation, and then reassembling the document afterward when processing has been completed. The number of threads can be optimized to suit the type of documents being transformed and the number of CPUs available. Top performing transformation software employs proven threading algorithms that have been subjected to years of testing and refinement in real-world conditions at diverse customer sites.

Memory Optimization

The performance of a transformation solution often depends on how well it manages memory. High-performance systems bring as much data as possible into memory for the current job, minimizing input and output (I/O) and cleaning up bottlenecks by reusing the stored data on subsequent steps or jobs. This can result in significant processing and storage efficiencies.
Of course, when a system is required to transform a mixture of many different document types, there is less reusable data and the system runs low on memory more often, slowing down the transformation process. Some systems achieve performance gains with soft caching, strategically placing the most used resources in memory. Other document transformation software uses cache prioritization algorithms to shuffle the most recently used data into a protected area while discarding everything else to free up space. Dynamic and batch processes require different algorithms for efficient memory management and prioritization.

Resource Caching

Print stream formats such as AFP are designed to save storage space by bundling resources that are common to large numbers of documents. High-performance document transformation software takes advantage of this feature by retrieving bundled resources and storing them in memory, thus minimizing the number of input and output transactions. The software parses each common resource only once and reuses it on multiple jobs. For example, the software could retrieve a company logo, hold it in memory, and use it again and again while transforming thousands of customer invoices.

Streaming Input and Output

To transform batches of documents as quickly and efficiently as possible, high-performance solutions are designed to ensure that bottlenecks do not occur in the physical input and output processing systems. Rather than reading the input stream in a traditional manner, high-performance software reads several blocks of data into memory. The software processes one block at a time while simultaneously reading the next block into the buffer, thus minimizing disk movement, reducing I/O time, and improving processing throughput. High-performance document transformation software also uses this technique to efficiently transfer output documents to their final destination on the network. The output data is written out to a staging area in memory until enough data is collected to write it out to disk, which reduces the number of costly write operations that are performed. (A short code sketch of this technique follows below.)

More Next Time

In the next blog post, we will explore 4 additional must-have features for superior document transformation performance. In the meantime, be sure to send me a note if you require more information (scastrucci@actuate.com) or download a complete white paper, Enterprise Document Transformation: 8 Must-Have Features for Superior Performance.
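P.S. For readers who like to see a technique in code, here is a minimal sketch of the double-buffered streaming idea described above, written with Node.js streams. It is illustrative only: transformBlock() is a hypothetical stand-in for a real per-block transformation, and the sketch does not depict any particular product’s internals.

// Minimal sketch: block-buffered input and output with back-pressure.
// transformBlock() is hypothetical; substitute a real transformation step.
const fs = require("fs");

function transformFile(inputPath, outputPath, blockSize) {
  // highWaterMark makes the stream read ahead, so the next block is being
  // buffered while the current block is still being processed.
  const input = fs.createReadStream(inputPath, { highWaterMark: blockSize });
  const output = fs.createWriteStream(outputPath, { highWaterMark: blockSize });

  input.on("data", (block) => {
    const transformed = transformBlock(block); // hypothetical per-block work
    // write() stages output in memory; when the staging buffer fills,
    // pause reading until it drains so memory use stays bounded.
    if (!output.write(transformed)) {
      input.pause();
      output.once("drain", () => input.resume());
    }
  });
  input.on("end", () => output.end()); // flush the staged output to disk
}

transformFile("input.dat", "output.dat", 64 * 1024);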

Read More

OpenText Enhances Portfolio with Analytic Capabilities

By Mark Barrenechea, President and Chief Executive Officer, OpenText

Analytics are a hot technology today, and it is easy to see why. They have the power to transform facts into strategic insights that deliver intelligence “in the moment” for profound impact. Think “Moneyball” and the Oakland A’s in 2002, when Billy Beane hired a number-crunching statistician to examine their odds and changed the game of baseball forever. Across the board – from sports analysis to recommending friends to finding the best place to eat steak in town – analytics are replacing intelligence reports with algorithms that can predict behavior and make decisions. Analytics can create that 1 percent advantage that makes the 100 percent difference between winning and losing.

Analytics represent the next frontier in deriving value from information, which is why I’m pleased to announce that OpenText has recently acquired Actuate to enhance its portfolio of products. With powerful predictive analytics technology, Actuate complements our existing information management and B2B integration offerings by allowing organizations to analyze and visualize a broad range of structured, semi-structured, and unstructured data.

In a recent study, 96 percent of organizations surveyed felt that analytics will become increasingly important to their organizations in the next three years. From a business perspective, analytics offer customers increased business process efficiencies, greater brand experience, and additional personalized insight for better and faster decisions. In a Digital-First World, organizations will tap into sophisticated analytics techniques to identify their best customers, accelerate product innovation, optimize supply chains, and identify the drivers of financial performance. Agile enterprises incorporate consumer and market data into decision making. People are empowered when they have easy access to agile, flexible, and responsive analytical tools and applications.

Actuate enables developers to easily create business applications that leverage information about users, processes, and transactions generated by the various OpenText EIM suites. Customers will be able to view analytics for the entire EIM suite based on a common platform to reduce their total cost of ownership and get a comprehensive view for more elevated, strategic business insight.

Actuate is the founder of the popular open source integrated development environment (IDE), BIRT, and develops the world-class deployment platform, BIRT iHub™. BIRT iHub™ significantly improves the productivity of developers working on customer-facing applications. More than 3.5 million BIRT developers and OEMs use Actuate to build scalable, secure solutions that deliver personalized analytics and insights to more than 200 million customers, partners and employees. Because the platform is designed to be embeddable, developers can use it to enrich nearly any application. And these analytics-enriched applications can be delivered on premises, in the cloud, or in any hybrid scenario.

We are excited to welcome the Actuate team into the OpenText family as we continue to help drive innovation and offer the most complete EIM solution in the market. Read the press release on the acquisition here.

Read More

Analytics – Improve Your Customer Communications

Financial services, retail, utility and government organizations are all looking for an edge in communicating with their customers. Customers today get plenty of communications, and the trick to engaging them with your brand and your products is to cut through the clutter and be noticed. Using analytics to target your customer communications is a key way to do this.

Customer communications include statements, correspondence, bills, invoices and even advertising or flyers. They can be delivered through multiple channels and in multiple formats – on the web, in emails, on smart phones and tablets, and on paper. The key is to personalize and target your messages to your customers. This will help your communications be noticed and perhaps even acted upon. In this blog post, we will look at how to use predictive analytics to improve your customer communications. In the next post, we will take a look at how data visualizations and reporting in customer communications can improve the experience for your customers.

Using Predictive Analytics

Predictive analytics takes mounds of data, reveals trends, and transforms data into information. This information can come from a number of sources, including the user’s web or physical navigation patterns, their buying history, their reactions to previously presented offers, and relevant demographic information. The information from these sources can be used to target customers through physical or electronic communications. Examples of such targeting include:

Placing a personalized ad on a banking statement that provides specific information on products and services for a specific customer, based on their demographic, family, financial and risk profiles.

Using segmentation to develop targeted emails for different consumers, based on their historic buying patterns (physical or electronic in origin) with a retail organization.

Analyzing engagement history and using this information to determine a course of communications with customers who appear to be disengaging from the brand and may eventually move away from it.

Determining what the next purchase is for an individual, based on past history, and presenting an offer or coupon to the customer to encourage sales and loyalty.

With relevant communications, customers do not feel their time is wasted; they will develop a positive image of your brand, and they will continue to engage with it. (A small code sketch of the segmentation idea follows below.)
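To illustrate the segmentation idea in the simplest possible terms, here is a hypothetical sketch of rule-based offer selection. The customer fields, segment names and rules are all invented for illustration; real predictive models would score customers statistically rather than rely on hand-written rules.

// Hypothetical rule-based segmentation for targeted email offers.
function chooseOffer(customer) {
  // Re-engage customers who appear to be drifting away from the brand.
  if (customer.monthsSinceLastPurchase > 6) {
    return "win-back-offer";
  }
  // Next-purchase targeting based on historic buying patterns.
  if (customer.purchaseCategories.includes("coffee")) {
    return "coffee-loyalty-coupon";
  }
  // Default communication for everyone else.
  return "generic-newsletter";
}

const offer = chooseOffer({
  monthsSinceLastPurchase: 8,
  purchaseCategories: ["coffee", "bakery"]
});
// offer === "win-back-offer"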

Read More

Data Driven Digest for January 23

Each Friday we share some favorite reporting on, and examples of, data driven visualizations and embedded analytics that came onto our radar in the past week. Use the “Subscribe” link at left and we’ll email you with new entries.

State the Obvious: It’s not just your kids who can’t put down their phones. Members of Congress were busily tweeting during Tuesday night’s State of the Union address. And although the two parties can’t agree on much, they were pretty well synchronized – united, you might say – on when to tweet. In the bar chart above, The New York Times visualized the number of tweets by Democrats and Republicans in five-minute increments throughout the speech. The accompanying article gives details on the process and more analysis of the results.

Law of the Land: You’ve probably heard of Moore’s Law, which states that the number of transistors in an integrated circuit doubles approximately every two years. Rather than take Moore’s Law as an obvious truth, Plotly published a clever blog with an interactive graph (click through the static version above) that proves it. The same blog post also visualizes Zipf’s Law (dealing with word use), Benford’s Law (which explores number frequency) and Hubble’s Law (which is related to the Doppler Effect). Each chart is interactive and has an enticing “Play with this data!” link for further exploration.

Chop chop: Martyn Jones published an article on LinkedIn late last year that outlined 7 New Big Data Roles for 2015 as he saw them. The article generated some buzz in Big Data circles, perhaps in part because of the fanciful titles that Jones suggested. He listed ten roles in all (in spite of the article’s title), including Data Trader, Data Taster, and Data Czar. But my favorite was the Data Butcher, who “removes the fat data from the lean data, and provides quality data that can then be subsequently ‘sliced, diced and spiced’ in downstream analytics applications.” The description was accompanied by the image above. (There’s Fred’s data, right near the tail.)

Do you have a favorite or trending resource on embedded analytics and data visualization? Share it with the readers of the Actuate blog. Submit ideas to blogactuate@actuate.com or add a comment below. Subscribe (at left) and we’ll email you when new entries are posted.

Recent Data Driven Digests:
January 16: Tallest buildings, Ohio State’s Elo rating, airport efficiency
January 9: Global education, rainfall animation, cutting-edge visualizations
January 2: New Year’s Eve, the news in Tweets, nasty flu season


Under the Hood – BIRT iHub F-Type: Understanding the Processes

While your customers don't need to see the inner workings of your app, as a developer, you need to be the master of its parts and processes. It's time to get under the hood.

Hello BIRT community! My name is Jesse Freeman. Although I am not new to BIRT or Actuate, I am transitioning into a significantly more community-centric role. I have spent the last two years working as a Customer Support Engineer for Actuate, specializing in the server and designer products, and I am excited to bring my product and support knowledge to the larger BIRT community. I come from a Java/JavaScript background and am a big fan of multi-platform, open source and open standard technologies. I am an advocate of Linux operating systems and have used or dabbled with the majority of the larger Linux distributions; in particular, I am a big fan of Arch Linux and CentOS.

Over the next several weeks I will publish a series of blog posts that bring my support knowledge to the community, including posts on understanding the BIRT iHub F-Type's processes and configuration, as well as troubleshooting. This series will provide technical insight for anybody who will be configuring and/or maintaining a BIRT iHub F-Type installation. BIRT iHub F-Type is a free BIRT server released by Actuate. It incorporates virtually all the functionality of the commercially available BIRT iHub and is limited only by the amount of output it can deliver on a daily basis, making it ideal for departmental and smaller scale applications. When BIRT iHub F-Type reaches its maximum output capacity, additional capacity is available as an in-app purchase.

Understanding the Processes

The first topic of my Under the Hood blog series is Understanding the Processes. When I first started in support, one of the first things I learned was the breakdown of all of the processes and their specific roles, and that knowledge was invaluable for the duration of my time providing support. Understanding the processes and their responsibilities provides insight into how the product works for configuration and integration purposes, and helps you know where to look for more information if an issue arises. With that in mind, here is the list of the BIRT iHub F-Type processes and their responsibilities:

- ihubd – The daemon process responsible for the initial startup of BIRT iHub F-Type. The ihubd process starts the ihubc and ihubservletcontainer processes. If issues occur during startup, this is one of the first processes to examine.
- ihubservletcontainer – As the name implies, this process is the front-end servlet container for BIRT iHub F-Type. It is hosted out of a Tomcat instance integrated within BIRT iHub, which means anybody familiar with Tomcat should feel right at home configuring or troubleshooting it.
- ihubc – The parent of all other processes started by BIRT iHub, including the ihub, jsrvrihub and jfctsrvrihub processes. The ihubc process is the SOAP endpoint for BIRT iHub's communication, the job dispatcher and the resource group manager; it also takes requests from front-end applications such as the integrated Information Console.
- ihub – The ihub process is responsible for communication with the metadata database, as well as with the Report Server Security Extension (RSSE) if one has been implemented.
- jsrvrihub – Within a single installation there may be multiple jsrvrihub processes running simultaneously. A typical out-of-the-box installation has at least two: one used for viewing dashboards, and the other for transient execution and viewing of reports.
- jfctsrvrihub – The jfctsrvrihub process is used for the execution of background jobs on BIRT iHub. This includes any report explicitly scheduled to run at a specific time (or immediately), and it allows report output to be written to a directory within the ihub process rather than viewed immediately in the current browser session.

Whether you are beginning an installation, working on an integration project, or troubleshooting an existing installation, this information will help you know which process to examine.
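As a first sanity check when something goes wrong at startup, it helps to confirm that each of these processes is actually running. Here is a small sketch (Node.js on Linux; my own example, not an official tool), with name matching that is deliberately approximate since installation paths and process names can vary:

// Rough sanity check (Node.js, Linux): confirm the expected iHub processes
// appear in the process table. Word-boundary matching is approximate.
var execSync = require("child_process").execSync;

var expected = ["ihubd", "ihubservletcontainer", "ihubc", "ihub", "jsrvrihub", "jfctsrvrihub"];
var psLines = execSync("ps -eo args=").toString().split("\n");

expected.forEach(function (name) {
  var found = psLines.some(function (line) {
    return new RegExp("\\b" + name + "\\b").test(line);
  });
  console.log(name + ": " + (found ? "running" : "not found"));
});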
Thank you for reading. Subscribe to this blog and you will be the first to know when I publish the next Under the Hood – BIRT iHub F-Type post, a review of the Primary Configuration Files. Download BIRT iHub F-Type today so you can follow along. If you have any questions, post them in the comments below or in the BIRT iHub F-Type forum. -Jesse


Extend iHub Interactive Viewer with Fast Filters [Code]

Fifth in a series of blog posts about free extensions to OpenText Interactive Viewer.

Do you want to enable your users to filter a table in an iHub report with just a few clicks? If a column in a table has just a few discrete values, you can make column-based filtering easy using a technique we call Fast Filter. This post shows how to create a Fast Filter: in short, a selectable drop-down menu of distinct values that appears in the header of a column. (The screenshot above shows how a Fast Filter looks for users.) Users can combine multiple Fast Filters to home in on the data they are looking for, because the filters stack: once a user applies a Fast Filter to one column, the other columns display only the values that pass the filters already in place. With Fast Filters, app users don't have to waste time filtering columns individually or otherwise fine-tuning the data in a table.

The steps for creating Fast Filters are:

1. Add a text item to the header of each column that you want to filter on.
2. Change the type of each text item to "HTML".
3. Copy the following HTML code into each text element:

<select id='{COLUMN}_FILTER'
        onchange='javascript:filterColumn("{COLUMN}")'
        style="width:100px;">
</select>

Here's how this looks on screen:

4. In the HTML code for each column header, replace "{COLUMN}" with the dataset column name.
5. In clientScripts onContentUpdate, paste the code found at the end of this post, as shown below. (You can download the code in a text file, or download the Report Design.) You can find clientScripts onContentUpdate by clicking on an empty portion of the Layout Manager, clicking on the Script tab, and selecting clientScripts in the first pulldown and onContentUpdate in the second. It will look like this:

6. Add every column that you created a Fast Filter HTML item for to the column list. In the example above, we are not enabling Fast Filter for the "TOTALREVENUE" column, so our column list is:

columns[0] = "COUNTRY";
columns[1] = "PRODUCTVENDOR";
columns[2] = "PRODUCTLINE";
columns[3] = "PRODUCTNAME";
columns[4] = "REVENUEYEAR";

7. Test your report in OpenText™ Analytics Designer.

Troubleshooting

If your embedded code is not working, try debugging in Chrome. If you add a "debugger" statement in your JavaScript, Chrome will break at that point whenever the DevTools debugger is open. (The code below includes a few debugger statements for exactly this purpose; remove them for production.)

Conclusion

We can make it easy for users to find the information they want by simply adding a drop-down to the column header and including a small amount of JavaScript. The next (and final) extension blog entry will demonstrate how to use the JSAPI to search multiple columns in a table at the same time.

Links to other blog posts in this series:
1. Extend Interactive Viewer with Row Highlighting
2. Extend Interactive Viewer with a Pop-Up Dialog Box
3. Extend iHub Reports and Dashboards with Font Symbols
4. Extend iHub Dashboards with Disqus Discussion Boards
6. Extend Interactive Viewer with Table-Wide Search

Fast Filter JavaScript Code

var columns = new Array();

this.createFastFilters = function () {
  debugger;
  columns[0] = "COUNTRY";
  columns[1] = "PRODUCTVENDOR";
  columns[2] = "PRODUCTLINE";
  columns[3] = "PRODUCTNAME";
  //columns[5] = "TOTALREVENUE";
  columns[4] = "REVENUEYEAR";

  // Initialize the state of each column filter to not filtered.
  for (var i = 0; i < columns.length; i++) {
    if (sessionStorage[columns[i] + ".state"] == null) {
      sessionStorage[columns[i] + ".state"] = "Not Filtered";
    }
  }

  // Download the result set so we can populate the dropdowns with distinct values.
  var table = this.getViewer().getTable();
  var request = new actuate.data.Request(table.getBookmark(), 0, 100);
  request.setMaxRows(0);
  request.setColumns(columns);
  this.getViewer().downloadResultSet(request, this.addOptions);
};

this.addOptions = function (resultSet) {
  debugger;
  var columnIndex = 0;
  var colValue;
  var i;
  var found;
  var elem;

  // Load unique values into the dropdowns.
  while (resultSet.next()) {
    for (columnIndex = 0; columnIndex < columns.length; columnIndex++) {
      found = false;
      elem = document.getElementById(columns[columnIndex] + "_FILTER");
      if (elem == null) continue; // Skip the column if it is not filterable.

      colValue = resultSet.getValue(columnIndex + 1) != null ?
        resultSet.getValue(columnIndex + 1) : "-- No Value --";

      // See if we already put this value in the list.
      for (i = 0; i < elem.length; i++) {
        if (elem.options[i].text == colValue) {
          found = true;
          break;
        }
      }
      if (found) continue;

      var option = document.createElement("option");
      option.text = colValue;
      try {
        // For IE earlier than version 8.
        elem.add(option, elem.options[null]);
      } catch (e) {
        elem.add(option, null);
      }
    }
  }

  // If a column has too many values or is a number, consolidate.
  // Now sort all the column filters and add the top-level options.
  for (columnIndex = 0; columnIndex < columns.length; columnIndex++) {
    elem = document.getElementById(columns[columnIndex] + "_FILTER");
    if (elem == null) continue; // Skip the column if it is not filterable.

    $("#" + columns[columnIndex] + "_FILTER").html(
      $("#" + columns[columnIndex] + "_FILTER option").sort(function (x, y) {
        return $(x).text() < $(y).text() ? -1 : 1;
      })
    );

    debugger;
    var option = document.createElement("option");
    option.text = "<Clear Filter>";
    try {
      // For IE earlier than version 8.
      elem.add(option, elem.options[0]);
    } catch (e) {
      elem.add(option, 0);
    }

    option = document.createElement("option");
    option.text = sessionStorage[columns[columnIndex] + ".state"];
    try {
      // For IE earlier than version 8.
      elem.add(option, elem.options[0]);
    } catch (e) {
      elem.add(option, 0);
    }

    elem.selectedIndex = 0;
  }
};

window.myViewerId = this.id;

window.filterColumn = function (columnName) {
  var ddId = columnName + "_FILTER";
  var elem = document.getElementById(ddId);
  var strValue = elem.options[elem.selectedIndex].value;
  var table = actuate.getViewer(myViewerId).getTable();
  debugger;

  if (strValue == "<Clear Filter>") {
    table.clearFilters(columnName);
    sessionStorage[columnName + ".state"] = "Not Filtered";
  } else if (strValue == "< Top 5 >") {
    var filter = new actuate.data.Filter(columnName, actuate.data.Filter.TOP_N, 5);
    table.setFilters(filter);
    sessionStorage[columnName + ".state"] = "Filtered";
    elem.selectedIndex = 0;
  } else if (columnName == "TOTALREVENUE") {
    var filter = new actuate.data.Filter(columnName, actuate.data.Filter.GREATER_THAN, strValue);
    table.setFilters(filter);
    sessionStorage[columnName + ".state"] = "Filtered";
    elem.selectedIndex = 0;
  } else {
    var filter = new actuate.data.Filter(columnName, actuate.data.Filter.EQ, strValue);
    table.setFilters(filter);
    sessionStorage[columnName + ".state"] = "Filtered";
    elem.selectedIndex = 0;
  }
  table.submit();
};

this.createFastFilters();
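One detail worth noting about the code above: window.filterColumn knows how to handle a "< Top 5 >" selection, but addOptions never adds that choice to any dropdown. If you want to expose it, a minimal addition (my suggestion, not part of the downloadable design) inside the final loop of addOptions would be:

// Hypothetical addition inside the final loop of addOptions: expose the
// "< Top 5 >" choice that window.filterColumn already handles.
var topOption = document.createElement("option");
topOption.text = "< Top 5 >";
elem.add(topOption, null); // Append after the existing values.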


Expert Advice on Embedded BI with Howard Dresner [Webinar]

For once, your CEO and CIO agree on something: Your company needs to embed analytics into its applications. You've been tasked with researching which platform is best for you, and you probably have two items on your to-do list: learn from an industry expert who thoroughly studies the many different embedded analytics platforms, and hear from a company that has successfully embedded analytics into its software. You can do both on January 22 by attending Embedded BI Market Study with Howard Dresner, a free webinar sponsored by Actuate.

Dresner, you probably know, is Chief Research Officer of Dresner Advisory Services, a respected technology analyst firm. Dresner (@howarddresner) coined the term "business intelligence" in 1989 and has studied the market drivers, technologies, and companies associated with BI and analytics ever since. It's safe to say that nobody knows the sector better. In this webinar, Dresner will highlight the results of his recent Wisdom of Crowds report, the Embedded Business Intelligence Market Study, published in October 2014. Dresner's study taps the expertise of some 2,500 organizations that use BI tools, focusing specifically on their efforts to embed analytics in other applications. In the webinar, Dresner will cover three main subjects:

- User intentions for – and perceptions of – embedded analytics, segmented by industry, types of users, architecture and vendor
- Architecture needs and priorities (such as web services, HTML/iFrame and JavaScript API) for embedding, as identified by the technologists who implement embedded analytics (see the sketch at the end of this post)
- Ratings of 24 embedded BI vendors, based on both the architecture and the features the individual vendors offer, and the reasons Actuate garnered the top ranking

To add the user's perspective, Dresner will then give the floor to Kevin Larnach, Executive Vice President of Operations at Elcom. Larnach will explain how Elcom embeds Actuate's reporting solution in PECOS, its cloud-based e-procurement solution. Embedded analytics enables users of PECOS – a user base 120,000 strong, in more than 200 organizations, managing nearly $20 billion in total procurement spending annually – to access standard reports, slice and dice data for analysis, create custom reports and presentations of the data, and export transaction history to many different formats, all without IT expertise. As the diagram above shows, PECOS touches all aspects of the procurement process.

PECOS users include the Scottish Government (including health services, universities and colleges, and government departments), several health services groups in Britain, the Northern Ireland Assembly, several school districts in the United States, the Tennessee Valley Authority (TVA), and many other organizations and companies. Elcom has identified over a billion dollars in audited savings that its customers have accrued thanks to embedded analytics – more than $500 million in the healthcare sector alone. Elcom's application is truly an embedded analytics success story.

The embedded analytics capability in PECOS, delivered with Actuate technology, is an important competitive differentiator for Elcom. Its competitors' products either have limited fixed reporting or don't offer any standard reporting at all. Those competitors "are scrambling to adopt a flexible embedded approach such as the one enjoyed by PECOS users," Elcom says.

You're sure to have questions for Dresner and Larnach, so the webinar will include a Q&A session. (An Actuate technical expert will also be on hand if you have specific questions about our embedded analytics capabilities.) The webinar will be accompanied by live Tweets using the hashtag #embeddedanalytics. Register today.
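For readers wondering what those architecture options look like in practice, here is a deliberately generic sketch of the two simplest approaches mentioned in the study: iFrame embedding and a JavaScript API. The URL, the report name, and the biViewer object are placeholders invented for illustration, not any specific vendor's interface:

<!-- Option 1: iFrame embedding. The host app simply points an iframe
     at a report URL (placeholder shown); minimal integration effort. -->
<iframe src="https://bi.example.com/viewer?report=sales-summary&embed=true"
        width="800" height="600"></iframe>

<!-- Option 2: JavaScript API embedding. The host app renders the report into
     an element it controls and passes context along as parameters.
     "biViewer" is a hypothetical library, not a specific product's API. -->
<div id="report-container"></div>
<script>
  biViewer.render({
    container: "report-container",
    report: "sales-summary",
    parameters: { region: "EMEA" } // Context supplied by the host app.
  });
</script>

The trade-off is typical: the iFrame route is quick but opaque, while a JavaScript API gives the host application control over parameters, events and styling.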


Data Driven Digest for January 16

Each Friday we share some favorite reporting on, and examples of, data driven visualizations and embedded analytics that came onto our radar in the past week. Use the "Subscribe" link at left and we'll email you with new entries.

Tall Order: 2014 was a big year for big buildings. The Council on Tall Buildings and Urban Habitat (who knew there was such a council?) reported that 97 buildings of 200 meters or taller were completed last year, and the Washington Post turned the council's data into the cool chart – part bar graph, part illustration – you see above. In short (pun intended), it shows the top 20 as if they were side by side, making for easy height comparison. If anybody needs convincing of China's construction boom, tell them that half of the 20 tallest skyscrapers raised in 2014 were built in that country.

Pigskin Plot: On Monday night, Ohio State beat the University of Oregon 42-20 in the first-ever College Football Playoff National Championship. (A mouthful, but that was the game's official name.) The win capped off a remarkable season for the Buckeyes, according to the sports data nerds at FiveThirtyEight. Andrew Flowers calculated that Ohio State had the second-highest season-ending Elo rating ever (31.7) and found that its rating improved dramatically over the course of the season. Appropriately enough, a plot of team Elo changes (shown above) is shaped like a football.

Project Runway: How efficient is your local airport? The Economist created the chart above showing the world's 15 busiest airports by passenger volume. The chart is a model of efficiency; along with showing each airport's passenger volume (the red bar), it shows each one's size (both as an illustration and in square kilometers), along with the number of runways and terminals, present and planned. Comparisons are easy: Denver, huge in size, is dwarfed by Atlanta, which handles almost twice the number of passengers in one-sixth the area.

Do you have a favorite or trending resource on embedded analytics and data visualization? Share it with the readers of the Actuate blog. Submit ideas to blogactuate@actuate.com or add a comment below. Subscribe (at left) and we'll email you when new entries are posted.

Recent Data Driven Digests:
January 9: Global education, rainfall animation, cutting-edge visualizations
January 2: New Year's Eve, the news in Tweets, nasty flu season
December 26: Dudes and bros, football on social media, mapping pictures


Data Driven Summit – Customers Rave Over Embedded Analytics [Video]

Once in a while a term comes along that encapsulates the spirit of technology so well that it is broadly adopted by business leaders and analysts. Some examples that come to mind include the Cloud (to describe the broader Internet), eCommerce (to describe online business), Big Data (to describe the volume, variety and velocity of information) and app (to describe a software application, typically for a mobile device). Recently, the term embedded analytics has resonated with people and organizations that track the next big thing. In a recent article on AllThingsD, Gene Frantz, Principal Fellow at Texas Instruments, gave credibility to the term, noting that "embedded analytics involves gathering data from sensors, processing it in real time, using algorithms to make conclusions and then initiating action."

We at Actuate knew early on that business intelligence was only the tip of the iceberg when it comes to extracting value from business data. The real value lay in the ability to embed analytics in other applications and their business processes. As the data science industry grew over the last 30 years, the platforms and measurement tools used to derive context from information shifted, and business leaders today demand that data be available in any location and on any device. Because Actuate has tracked this trend for years, our products were ready when the concept of embedded analytics became mainstream. Actuate was ranked as the No. 1 vendor in the "Dresner 2014 Embedded Business Intelligence Study," the latest in Dresner's "Wisdom of Crowds" series of market insights. (Feel free to browse through the study results yourself.)

During Data Driven Summit 2014 – Actuate's annual series of customer events – we heard about the importance of embedded analytics from our customers and industry analysts in seven different global centers. Here's what customers said about embedded analytics at the Data Driven Summit in Santa Clara, Calif. and New York, NY. We'll be posting more of the Data Driven Summit 2014 video series here, including the other demonstrations, BIRT data visualization insights and panel discussions with industry insiders.


Data Driven Digest for January 9

Each Friday we share some favorite reporting on, and examples of, data driven visualizations and embedded analytics that came onto our radar in the past week. Use the "Subscribe" link at left and we'll email you with new entries.

Circle of Learning: Acasus, a Dubai-based consultancy that helps governments reform education and health policy, worked with Vignette Interactive to create one of the most thought-provoking maps I've ever seen. Shown above, it displays the number and percentage of children in the world who reach a basic level of education. The color code shows the percentage (red to green equals low to high) and the circle size shows the number of children. Even with the data presented in this manner, the finished product is still clearly recognizable as a map of the earth. It's beautifully done, and the full interactive version is endlessly interesting.

Wet Ones: Along similar lines, Views of the World is a website created by Benjamin Hennig, an academic geographer. He collects maps that visualize earth data – from forestry to tsunamis to demographics – in myriad creative forms. One of Hennig's own creations, shown above and linked here, is an animated map that depicts where precipitation falls over the course of a year. The map resembles a pumping heart as much as anything (Africa and South America, in particular), and it's fascinating to watch the rains ebb and flow in regions you know well.

Gallery Show: Take a trip through some extraordinary data visualizations in Beyond The Visualization Zoo, a blog post by Mike Beneth that appeared on Data Science Central this week. Beneth writes about his favorite book on data visualization, then illustrates his book report with some unusual entries, such as the hive plot above (click through for the interactive version). His examples are all drawn from the D3 gallery. Beneth's article reminds us how far data visualization has come – and highlights the challenging frontiers that still lie ahead as we try to visualize new sources like genetic data.

Do you have a favorite or trending resource on embedded analytics and data visualization? Share it with the readers of the Actuate blog. Submit ideas to blogactuate@actuate.com or add a comment below. Subscribe (at left) and we'll email you when new entries are posted.

Recent Data Driven Digests:
January 2: New Year's Eve, the news in Tweets, nasty flu season
December 26: Dudes and bros, football on social media, mapping pictures
December 19: Song titles, gender neutral names, Ruble troubles


Top 5 Ways You Win By Upgrading Capacity on BIRT iHub F-Type

Upgrades are often worth the cost: extra leg room for your 12-hour flight; extra-large on your order of French fries; extra data-out capacity on your BIRT iHub F-Type server for data driven apps with embedded analytics.

BIRT iHub F-Type is the free BIRT server from Actuate for boosting open source and Java developer productivity. It incorporates virtually all the functionality of the commercially available BIRT iHub Visualization Platform. Within the first 15 minutes of installing BIRT iHub F-Type, a developer can import a BIRT report, schedule secure distribution, or export a report as a full-function Excel spreadsheet. Companies of all sizes are currently test-driving BIRT iHub F-Type and evaluating Actuate's data visualization and reporting technology for integrating analytic functionality into their customer-facing apps before launching them at a bigger scale.

Currently, BIRT iHub F-Type is limited only by the amount of output it can deliver on a daily basis, which makes it ideal for departmental and smaller scale applications. But sometimes you need more, so when BIRT iHub F-Type reaches its maximum daily output, additional capacity is available as an in-app purchase. Starting this month, customers can purchase additional data-out capacity for their F-Type installations in daily 50MB increments for as little as $6,000 per year, with no long-term commitment beyond the 12-month term.

Here are five great reasons to upgrade the daily capacity of your BIRT iHub F-Type account:

1. Deliver insights from multiple RDBMS data sources, high volume real-time data, social media and big data sources
2. Embed reports, dashboards and individual visualizations into your own apps with the JavaScript API
3. Enjoy enterprise-grade report and task scheduling, email notifications, secure sharing and row-level data security already built into BIRT iHub F-Type
4. Export to Excel (including formulas), PDF, PowerPoint and XML formats with ease
5. Interact with on-page data analysis, drag-and-drop dashboards, crosstab analytics and HTML5 active visualizations

And if you find that you need to increase your daily data-out limit even further, you can purchase additional 50MB expansion packs without having to call your IT department. Add data packs at any time based on your actual usage needs; credit for the unused portion of your existing data-out plan will be applied toward the upgraded plan. To help you gauge how much data you consume, BIRT iHub F-Type monitors usage and prompts the developer within the software when usage may exceed current capacity. (A back-of-the-envelope sizing sketch follows at the end of this post.)

Once you download, install and activate your BIRT iHub F-Type account, you'll want to check out a series of 30-minute live sessions designed to get you up to speed on BIRT iHub F-Type. Attend at least one session and you are eligible to enter the first-ever BIRT iHub F-Type Pro Awards Program. Submissions will be reviewed by an independent jury of professionals, and awards will be handed out to the competition's winners on a bi-weekly, monthly and quarterly basis. We'll post additional details here soon, so subscribe (at left) and you'll be the first to know!
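As promised above, here is a back-of-the-envelope sizing sketch in JavaScript. The 50MB increment and the $6,000-per-year figure come from this post; the included daily allowance is a placeholder parameter you would replace with your installation's actual limit:

// Rough sizing sketch: how many 50MB data-out packs cover a given daily output?
// Pricing comes from the post above; verify current figures before budgeting.
function capacityPlan(dailyOutputMB, includedDailyMB) {
  var PACK_MB = 50;
  var PACK_PRICE_PER_YEAR = 6000;
  var overageMB = Math.max(0, dailyOutputMB - includedDailyMB);
  var packs = Math.ceil(overageMB / PACK_MB);
  return { packs: packs, annualCost: packs * PACK_PRICE_PER_YEAR };
}

// Example: an app pushing 180MB/day against a hypothetical 50MB included tier.
console.log(capacityPlan(180, 50)); // { packs: 3, annualCost: 18000 }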
