Friday, December 5, 2014

How to Configure Report Scheduler to work on your Jasper Server

To get the most out of this capability, it is important to become familiar with how this feature can be configured. This tutorial covers the most commonly used configuration parameters associated with the scheduling feature. You will need:
  • Access to the file system of, and the ability to stop and start, an operational installation of JasperReports Server
  • Access to a running outbound mail server
Setting Up the Connection to your Outbound Email Server
In order for the scheduler to successfully send email notifications and distribute reports, it must be configured to connect to an outbound email server.

Open the following configuration file in your preferred text editor: /WEB-INF/
Set the value for the property to your outbound mail server.
Set the value for the mail.sender.protocol property to the protocol used by your outbound mail server.
Set the value for the mail.sender.port property to the port that your outbound mail server listens on.
Set the mail.sender.username and mail.sender.password properties using valid login credentials to your outbound mail server.
Example:
report.scheduler.mail.sender.username=myusername
report.scheduler.mail.sender.password=mypassword
Restart JasperReports Server for the configurations to take effect.
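Taken together, the mail-server settings might look like the following sketch (all values are placeholders to replace with your own; the property names follow the report.scheduler.mail.sender prefix used in the example above):

```properties
report.scheduler.mail.sender.host=mail.example.com
report.scheduler.mail.sender.protocol=smtp
report.scheduler.mail.sender.port=25
report.scheduler.mail.sender.username=myusername
report.scheduler.mail.sender.password=mypassword
```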
Setting the Outbound Email Address for the Scheduler
When the scheduler sends out emails, these emails need to come from a specific sender address. This should be a valid email address that is monitored for bounce-backs and similar issues, so that appropriate action can be taken.
The following steps cover how to define the email address that should be used as the sender of emails coming from the scheduler jobs.
Open the following configuration file in your preferred text editor: /WEB-INF/
Set the value for the mail.sender.from property to the email address you would like emails to come from.
Restart JasperReports Server for the configurations to take effect.
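As a sketch, the sender address line might look like the following (the address is a placeholder; the full property name follows the report.scheduler prefix used in the earlier example):

```properties
report.scheduler.mail.sender.from=scheduler@example.com
```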

Defining the URI for Emailed Report Links
When a link to a report is sent out in an email from a scheduler job, this link must refer the user back to the appropriate place on the server to access the output. In order for this to occur, the scheduler must be aware of the URI for JasperReports Server.
Example: If the default login page for JasperReports Server is accessed by going to, the URI for the scheduler should be set to
The following steps outline how to set the URI properly.

Open the following configuration file in your preferred text editor: /WEB-INF/
Set the value for the web.deployment.uri property to the URI for your JasperReports Server installation.
Restart JasperReports Server for the configurations to take effect.
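As a sketch, assuming a default local Tomcat deployment (host, port, and context path are placeholders to adjust for your installation):

```properties
report.scheduler.web.deployment.uri=http://localhost:8080/jasperserver
```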

Defining the Number of Scheduler Threads per Server
In order to keep the scheduler jobs from using too many resources on a particular server, it is important to control the number of threads used to process scheduled jobs.
  • The thread count controls the maximum number of concurrent scheduler jobs that will run on a server at once.
  • Increasing this count increases the throughput of schedules that can be processed, but may adversely impact the resources available for end-users that are interacting directly with the application.
The following steps cover how to configure the number of threads per server that will process scheduled jobs.

Open the following configuration file in your preferred text editor: /WEB-INF/
Set the value for the threadPool.threadCount property to the number of threads per server that you want to process scheduled jobs.
Restart JasperReports Server for the configurations to take effect.
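A sketch, using the fully qualified Quartz property name (the value 3 is an illustrative choice balancing schedule throughput against resources left for end users):

```properties
org.quartz.threadPool.threadCount=3
```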

Defining the Job Misfire Threshold
The misfire threshold sets the amount of time that can pass before a missed or delayed scheduler job is skipped.
  • Increasing the misfire threshold will allow reports to run even if schedules are delayed due to server downtime or a backlog of jobs scheduled at the same time.
  • Decreasing this threshold is useful if you have jobs that are scheduled frequently.
Example: If a report is scheduled every hour, you may not want the 8am report to run if it is already 9am and the 9am report would give you the same information.
The following steps show how to set the misfire threshold.

Open the following configuration file in your preferred text editor: /WEB-INF/
Set the value for the jobStore.misfireThreshold property to the number of milliseconds beyond the scheduled time after which a delayed job should be skipped.
Example: For 30 minutes, org.quartz.jobStore.misfireThreshold=1800000
Restart JasperReports Server for the configurations to take effect.
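The millisecond arithmetic in the example above can be sketched as a tiny helper (illustrative only; the function name is ours, not Jaspersoft's):

```python
def misfire_threshold_ms(minutes):
    """Convert a tolerance in minutes to the millisecond value expected
    by org.quartz.jobStore.misfireThreshold (minutes * 60 s * 1000 ms)."""
    return minutes * 60 * 1000

print(misfire_threshold_ms(30))  # 1800000, matching the example above
```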

Disabling the Report Scheduler on a Server
Under certain circumstances, it may be necessary to disable the report scheduler completely on a server.
Example: It may make sense to have dedicated servers for end users separate from those that execute scheduler jobs.
The following steps cover how to disable the report scheduler on a particular server.

Open the following configuration file in your preferred text editor: /WEB-INF/applicationContext-report-scheduling.xml
Find the following lines:
<bean class="com.jaspersoft.jasperserver.api.engine.scheduling.quartz.QuartzSchedulerControl"
    init-method="start">
    <property name="scheduler" ref="quartzScheduler" />
</bean>
Remove the init-method definition: init-method="start"
Restart JasperReports Server for the configuration to take effect.
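After the attribute is removed, the bean definition would look like this sketch (the closing tag is assumed, since the snippet above is abbreviated):

```xml
<bean class="com.jaspersoft.jasperserver.api.engine.scheduling.quartz.QuartzSchedulerControl">
    <property name="scheduler" ref="quartzScheduler" />
</bean>
```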

How to Configure Report Scheduler to work on your Jasper Server

Another BI technology I'm exploring right now is Jaspersoft. It's a powerful open-source BI tool that is a bit similar to Oracle Hyperion. And since it's open source, it's way cheaper and easier to sell. lol

Going back to the topic, I'd like to share how easy it is to configure your Jasper Server to use its report scheduler facility.

Step 1 : 
Look for the WEB-INF folder in your Jaspersoft installation. I'm using a later version of Jaspersoft, so mine is located at ..\jasperserver-pro\apache-tomcat\webapps\jasperserver-pro\WEB-INF.

Step 2:
In the file, add your SMTP and email credentials. If you are using Gmail, be sure to use port 587, as port 465 does not work well with Gmail.

For example:
your mail server ( for gmail)
report.scheduler.mail.sender.username= your email's username
report.scheduler.mail.sender.password= your email's password
report.scheduler.mail.sender.from= your email address

report.scheduler.web.deployment.uri=http://localhost:8080/jasperserver-pro (This URL should be the same URL you see when you open your JasperServer login page. You only need to remove login.php.)

Step 3 (Optional) : 

If you'd like to increase your thread pool, you may do so by modifying the file .

Locate the line org.quartz.threadPool.threadCount and increase the thread count.

For example:
org.quartz.threadPool.threadCount = 3

Step 4 :
In the applicationContext-report-scheduling.xml file, locate the bean with id reportSchedulerMailSender. Add the following prop keys under the javaMailProperties property.

For Example:

<prop key="mail.smtp.auth">true</prop>
<prop key="mail.smtp.starttls.enable">true</prop>

Step 5:
Restart your JasperServer and schedule your report.
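As a sanity check before restarting, you could verify the edited file with a small script. This is a hypothetical helper, not part of Jaspersoft; the key names follow the examples in this post, and the .host key name is an assumption that it follows the same naming pattern.

```python
# Hypothetical sanity-check script (not part of Jaspersoft): verify that the
# scheduler properties edited above all have non-empty values.
REQUIRED_KEYS = [
    "report.scheduler.mail.sender.host",  # assumed name for the mail server key
    "report.scheduler.mail.sender.username",
    "report.scheduler.mail.sender.password",
    "report.scheduler.mail.sender.from",
    "report.scheduler.web.deployment.uri",
]

def missing_keys(properties_text):
    """Return the required keys that are absent or have an empty value."""
    values = {}
    for line in properties_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip()
    return [k for k in REQUIRED_KEYS if not values.get(k)]
```

Feed it the contents of your properties file; an empty result means every required key has a value.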



Sunday, November 2, 2014

Data Warehousing - Testing


Testing is very important for data warehouse systems to make them work correctly and efficiently. There are three basic levels of testing, listed below:

  •  Unit Testing
  •  Integration Testing
  • System testing


Unit Testing
  • In unit testing, each component is tested separately.
  • In this kind of testing, each module (i.e. procedure, program, SQL script, or Unix shell script) is tested.
  • This testing is performed by the developer.


Integration Testing
  • In this kind of testing, the various modules of the application are brought together and then tested against a number of inputs.
  • It is performed to test whether the various components work well together after integration.


System Testing
  • In this kind of testing, the whole data warehouse application is tested together.
  • The purpose of this testing is to check whether the entire system works correctly together or not.
  • This testing is performed by the testing team.
  • Since the whole data warehouse is very large, it is usually only possible to perform minimal system testing before the full test plan can be enacted.

Test Schedule

  • First of all, the test schedule is created as part of developing the test plan.
  • In this schedule, we predict the estimated time required for testing the entire data warehouse system.


  • There are different methodologies available, but none of them is perfect, because the data warehouse is very complex and large, and evolving in nature.
  • A simple problem may involve a very large query that takes a day or more to complete, i.e. the query does not complete in the desired time scale.
  • There may be hardware failures, such as losing a disk, or human errors, such as accidentally deleting a table or overwriting a large table.
Note: Due to the above-mentioned difficulties, it is recommended to always double the amount of time you would normally allow for testing.

Testing the backup recovery

This is a very important test that needs to be performed. Here is the list of scenarios for which this testing is needed:

  • Media failure
  • Loss or damage of tablespace or data file
  • Loss or damage of redo log file
  • Loss or damage of control file
  • Instance failure
  • Loss or damage of archive file
  • Loss or damage of table
  • Failure during data movement

Testing Operational Environment

There are a number of aspects that need to be tested. These aspects are listed below.

  • Security - A separate security document is required for security testing. This document contains the list of disallowed operations and devises a test for each.
  • Scheduler - Scheduling software is required to control the daily operations of the data warehouse. It needs to be tested during system testing. The scheduling software requires an interface with the data warehouse, through which the scheduler controls overnight processing and the management of aggregations.
  • Disk Configuration - The disk configuration also needs to be tested to identify I/O bottlenecks. The test should be performed multiple times with different settings.
  • Management Tools - All the management tools need to be tested during system testing. Here is the list of tools that need to be tested:
o    Event manager
o    System manager
o    Database manager
o    Configuration manager
o    Backup recovery manager

Testing the Database

There are three sets of tests, which are listed below:
  • Testing the database manager and monitoring tools - To test the database manager and the monitoring tools, they should be used in the creation, running, and management of a test database.
  • Testing database features - Here is the list of features that we have to test:
o    Querying in parallel
o    Create index in parallel
o    Data load in parallel
  • Testing database performance - Query execution plays a very important role in data warehouse performance measures. There is a set of fixed queries that need to be run regularly, and they should be tested. To test ad hoc queries, one should go through the user requirement document and understand the business completely. Take the time to test the most awkward queries that the business is likely to ask against different index and aggregation strategies.

Testing The Application

  • All the managers should be integrated correctly and work together, in order to ensure that the end-to-end load, index, aggregate, and query operations work as expected.
  • Each function of each manager should work correctly.
  • It is also necessary to test the application over a period of time.
  • Week-end and month-end tasks should also be tested.

Logistics of the Test

What are you really testing? The answer is that you are testing a suite of data warehouse application code.
The aim of the system test is to test all of the following areas:

  • Scheduling software
  • Day-to-day operational procedures
  • Backup recovery strategy
  • Management and scheduling tools
  • Overnight processing
  • Query performance
Note: The most important point is to test scalability. Failing to do so will leave us with a system design that does not work when the system grows.

Data Warehousing - Future Aspects

Following are the future aspects of Data Warehousing.

  • As we have seen, the size of open databases has roughly doubled in magnitude in the last few years. This change in magnitude is of great significance.
  • As the size of databases grows, estimates of what constitutes a very large database continue to grow.
  • The hardware and software available today do not allow a large amount of data to be kept online. For example, a telco call record requires 10 TB of data to be kept online, and that is just one month of records. Keeping records of sales, marketing, customers, employees, etc. would require more than 100 TB.
  • A record contains not only textual information but also some multimedia data. Multimedia data cannot be manipulated as easily as text data, and searching multimedia data is not an easy task, whereas textual information can be retrieved by the relational software available today.
  • Apart from size, planning, building, and running ever-larger data warehouse systems is very complex. As the number of users increases, the size of the data warehouse also increases, and these users will also require access to the system.

  • With the growth of the internet, there is a requirement for users to access data online.

Data Warehousing - Tuning

The data warehouse evolves over time, and it is unpredictable what queries users will produce in the future. Therefore it becomes difficult to tune a data warehouse system. In this chapter we will discuss how to tune the different aspects of a data warehouse, such as performance, data load, queries, etc.
Difficulties in Data Warehouse Tuning
Here is the list of difficulties that can occur while tuning the data warehouse.
  • The data warehouse never remains constant over time.
  • It is very difficult to predict what query a user will produce in the future.
  • The needs of the business also change with time.
  • Users and their profiles never remain the same over time.
  • A user can switch from one group to another.
  • The data load on the warehouse also changes with time.
Note: It is very important to have complete knowledge of the data warehouse.
Performance Assessment
Here is the list of objective measures of performance.
·        Average query response time
·        Scan rates.
·        Time used per day query.
·        Memory usage per process.
·        I/O throughput rates
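Two of the measures above can be sketched as simple calculations. This is illustrative only; the function names and the sample numbers are invented for the example.

```python
def average_response_time(times_seconds):
    """Average query response time over a set of observed query timings."""
    return sum(times_seconds) / len(times_seconds)

def io_throughput(bytes_read, elapsed_seconds):
    """I/O throughput rate in bytes per second."""
    return bytes_read / elapsed_seconds

print(average_response_time([2.0, 4.0, 6.0]))  # 4.0 seconds
print(io_throughput(1_000_000, 10))            # 100000.0 bytes/s
```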
Following are the points to be remembered:
  • It is necessary to specify the measures in a service level agreement (SLA).
  • There is no use trying to tune response times if they are already better than those required.
  • It is essential to have realistic expectations during performance assessment.
  • It is also essential that users have feasible expectations.
  • To hide the complexity of the system from the user, aggregations and views should be used.
  • It is also possible that a user will write a query you had not tuned for.
Data Load Tuning
  • Data load is a very critical part of overnight processing.
  • Nothing else can run until the data load is complete.
  • This is the entry point into the system.
Note: If there is a delay in transferring the data, or in the arrival of data, then the entire system is affected badly. Therefore it is very important to tune the data load first.
There are various approaches to tuning the data load, which are discussed below:
  • The most common approach is to insert data using the SQL layer. In this approach, the normal checks and constraints need to be performed. When data is inserted into the table, code runs to check whether there is enough space available; if not, more space may have to be allocated to the tables. These checks take time to perform and are costly in CPU, but they pack the data tightly by making maximal use of space.
  • The second approach is to bypass all these checks and constraints and place the data directly into preformatted blocks. These blocks are later written to the database. This is faster than the first approach, but it can work only with whole blocks of data, which can lead to some space wastage.
  • The third approach is that, while loading data into a table that already contains data, we can maintain the indexes.
  • The fourth approach says that, to load data into tables that already contain data, drop the indexes and recreate them when the data load is complete. Which of the third and fourth approaches is better depends on how much data is already loaded and how many indexes need to be rebuilt.
Integrity Checks
Integrity checking highly affects the performance of the load.
Following are the points to be remembered:
  • Integrity checks need to be limited because the processing required can be heavy.
  • Integrity checks should be applied on the source system to avoid performance degradation of the data load.
Tuning Queries
We have two kinds of queries in a data warehouse:
  • Fixed queries
  • Ad hoc queries
Fixed queries are well defined. The following are examples of fixed queries:
  • Regular reports
  • Canned queries
  • Common aggregations
Tuning fixed queries in a data warehouse is the same as in a relational database system; the only difference is that the amount of data to be queried may be different. It is good to store the most successful execution plans while testing fixed queries. Storing these execution plans will allow us to spot changing data sizes and data skew, as these will cause the execution plans to change.
Note: We cannot do much more on the fact table, but while dealing with dimension tables or aggregations, the usual collection of SQL tweaking, storage mechanisms, and access methods can be used to tune these queries.
To tune ad hoc queries, it is important to know the ad hoc users of the data warehouse. Here is the list of points that need to be understood about these users:
  • The number of users in the group.
  • Whether they use ad hoc queries at regular intervals of time.
  • Whether they use ad hoc queries frequently.
  • Whether they use ad hoc queries occasionally at unknown intervals.
  • The maximum size of query they tend to run.
  • The average size of query they tend to run.
  • Whether they require drill-down access to the base data.
  • The elapsed login time per day.
  • The peak time of daily usage.
  • The number of queries they run per peak hour.
Following are the points to be remembered:
  • It is important to track user profiles and identify the queries that are run on a regular basis.
  • It is also important to ensure that the tuning performed does not adversely affect performance.
  • Identify similar ad hoc queries that are frequently run.
  • If these queries are identified, then the database can be changed and new indexes added for those queries.
  • If these queries are identified, then new aggregations can be created specifically for those queries, resulting in their efficient execution.
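The idea of tracking which ad hoc queries recur, as candidates for new indexes or aggregations, can be sketched with a small script. This is illustrative only; the query texts and the helper name are invented.

```python
from collections import Counter

def frequent_queries(query_log, min_count=2):
    """Return (query, count) pairs for queries run at least min_count times,
    ordered from most to least frequent - candidates for indexing/aggregation."""
    counts = Counter(query_log)
    return [(q, c) for q, c in counts.most_common() if c >= min_count]

# Hypothetical log of ad hoc query texts collected over a day.
log = [
    "SELECT region, SUM(sales) FROM fact_sales GROUP BY region",
    "SELECT region, SUM(sales) FROM fact_sales GROUP BY region",
    "SELECT * FROM dim_customer WHERE id = 42",
]
print(frequent_queries(log))
```

With the sample log, only the repeated aggregation query is reported, suggesting it as a candidate for a precomputed aggregation.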