Pentaho 3.2 Data Integration

diddy81 writes "A book about the open source ETL tool Kettle (Pentaho Data Integration) is finally available. Pentaho 3.2 Data Integration: Beginner's Guide by María Carina Roldán is for everybody who is new to Kettle. In a nutshell, this book will give you all the information that you need to get started with Kettle quickly and efficiently, even if you have never used it before. The book offers loads of illustrations and easy-to-follow examples. The code can be downloaded from the publisher's website and Kettle is available for free from the SourceForge website. In sum, the book is the best way to get to know the power of the open source ETL tool Kettle, which is part of the Pentaho BI suite." Read on for the rest of diddy81's review.
Pentaho 3.2 Data Integration: Beginner's Guide
author: María Carina Roldán
pages: 492
publisher: Packt Publishing
rating: 9/10
reviewer: diddy81
ISBN: 1847199542
summary: If you have never used PDI before, this will be a perfect book to start with.
The first chapter describes the purpose of PDI, its components and the UI, explains how to install it, and walks you through a very simple transformation. The last part tells you, step by step, how to install MySQL on Windows and Ubuntu.

It's just what you want to know when you touch PDI for the first time. The instructions are easy to follow and understand and should help you to get started in no time. I honestly quite like the structure of the book: whenever you learn something new, it is followed by a section that recaps everything, which makes it much easier to remember.

Maria focuses on using PDI with files instead of the repository, but she describes how to work with the repository in the appendix of the book.

In chapter 2 you learn how to read data from a text file and how to handle header and footer lines. Next up is a description of the "Select values ..." step, which allows you to apply special formatting to the input fields and choose which fields to keep or remove. You then create a transformation that reads multiple text files at once by using a regular expression in the text file input step. This is followed by a troubleshooting section that describes all kinds of problems that might happen in the setup and how to solve them. The last step of the sample transformation is the text file output step.
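
To give an idea of what that looks like, here is a minimal sketch, not taken from the book, of the kind of regular expression the text file input step accepts in order to pick up several files at once. The file naming scheme is an assumption of mine, and the snippet simply tests the pattern in plain JavaScript:

    // Hypothetical pattern for a set of monthly extract files such as
    // sales_200901.txt ... sales_200912.txt (the naming scheme is made up).
    var pattern = /^sales_\d{6}\.txt$/;

    ["sales_200901.txt", "sales_200902.txt", "readme.txt"].forEach(function (name) {
      console.log(name + " matches: " + pattern.test(name)); // true, true, false
    });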

Then you improve this transformation by adding the "Get system info" step, which allows you to pass parameters to the transformation at execution time. This is followed by a detailed description of the data types (I wish I had had all this formatting info so easily at hand when I started). Then it gets even more exciting: Maria talks you through the setup of a batch process (scheduling a Kettle transformation).

The last part of this chapter describes how to read XML files with the XML file input step. There is a short description of XPath which should help you get going with this particular step easily.
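
For readers who have never seen XPath, here is a tiny, made-up illustration of the kind of expressions involved; the XML structure and element names are my own, not the book's. One path selects the repeating node that becomes a row, and shorter relative paths pick out the fields:

    // Invented sample document and paths, purely to illustrate the XPath idea.
    var sampleXml =
      "<orders>" +
      "  <order><id>1</id><amount>25.50</amount></order>" +
      "  <order><id>2</id><amount>12.00</amount></order>" +
      "</orders>";

    var rowPath    = "/orders/order"; // one output row per matching node
    var idPath     = "id";            // field paths, relative to each row node
    var amountPath = "amount";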

Chapter 3 walks you through the basic data manipulation steps. You set up a transformation that makes use of the calculator step (loads of fancy calculation examples here). For more complicated formulas Maria also introduces the formula step. Next in line are the Sort By and Group By steps, used to create some summaries. In the next transformation you import a text file and use the "Split field to rows" step. You then apply the filter step to the output to get a subset of the data. Maria demonstrates various examples of how to use the filter step effectively. At the end of the chapter you learn how to look up data by using the "Stream lookup" step. Maria describes very well how this step works (even visualizing the concept), so it should be really easy for everybody to understand.
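
To make the stream lookup idea concrete, here is a plain JavaScript sketch, not Kettle code and with invented field names: every row from the main stream gets an extra field looked up in a second stream by key, with a default value when no match is found, which is essentially what you configure the step to do:

    // Lookup stream: product names by id (made-up data).
    var lookupRows = [
      { product_id: 1, product_name: "Kettle" },
      { product_id: 2, product_name: "Teapot" }
    ];
    var byId = {};
    lookupRows.forEach(function (r) { byId[r.product_id] = r.product_name; });

    // Main stream: rows to be enriched.
    var mainRows = [ { product_id: 1, qty: 3 }, { product_id: 9, qty: 5 } ];
    mainRows.forEach(function (row) {
      row.product_name = byId[row.product_id] || "unknown"; // default on no match
    });
    console.log(mainRows); // the second row falls back to "unknown"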

Chapter 4 is all about controlling the flow of data: you learn how to split the data stream by distributing or copying the data to two or more steps. This is based on a good example: you start with a task list that contains records for various people and then distribute the tasks to a different output file for each of those people. Maria explains properly how "distribute" and "copy" work, and the concept is very easy to understand following her examples. In another example Maria demonstrates how you can use the filter step to send the data to different steps based on a condition. In some cases the filter step will not be enough, hence Maria also introduces the "Switch/Case" step, which you can use to create more complex conditions for your data flow. Finally Maria tells you all about merging streams and which approach/step is best to use in which scenario.
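
The routing idea is easy to picture with a small plain JavaScript sketch (again invented data, not Kettle code): each row is pushed to a different target depending on a field value, which is what the filter and Switch/Case steps do with real output steps instead of arrays:

    // A tiny task list; people and tasks are made up for illustration.
    var rows = [
      { person: "Ann", task: "write report" },
      { person: "Bob", task: "run backup" },
      { person: "Eve", task: "review code" }
    ];

    // One target per known person plus a default branch, standing in for output steps.
    var targets = { Ann: [], Bob: [], other: [] };

    rows.forEach(function (row) {
      var target = targets[row.person] || targets.other; // Switch/Case on a field
      target.push(row);
    });
    console.log(targets.other); // Eve's row ends up in the default branch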

In chapter 5 it gets really interesting: Maria walks you through the JavaScript step. In the first example you use the JavaScript step for complex calculations. Maria provides an overview of the available functions (String, Numeric, Date, Logic and Special functions) that you can use to quickly create your scripts by dragging and dropping them onto the canvas. In the following example you use the JavaScript step to modify existing data and add new fields. You also learn how to test your code from within this step. Next up (and very interesting), Maria tells you how to create special start and end scripts (which are executed only once, as opposed to the normal script, which is executed for every input row). We then learn how to use the transformation constants (SKIP_TRANSFORMATION, CONTINUE_TRANSFORMATION, etc.) to control what happens to the rows (very impressive!). In the last example of the chapter you use the JavaScript step to transform an unstructured text file. This chapter offered quite a lot of in-depth information, and I have to say that there were actually some things I didn't know.
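
For readers wondering what such a script looks like, here is a rough sketch of the sort of thing you type into the JavaScript step. The field names are invented, and the snippet only makes sense inside the step, where input fields show up as variables, a new variable becomes a new output field once it is added to the step's field grid, and trans_Status together with the constants mentioned above decides what happens to the current row:

    // firstname, lastname and country are assumed input fields of the stream.
    var full_name = firstname + " " + lastname;   // becomes a new output field

    if (country == null || country == "") {
      trans_Status = SKIP_TRANSFORMATION;         // drop rows without a country
    } else {
      trans_Status = CONTINUE_TRANSFORMATION;     // keep everything else
    }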

In the real world you will not always get the data set structure in the way that you need it for processing. Hence, chapter 6 tells you how you can normalize and denormalize data sets. I have to say that Maria put a really huge effort into visualizing how these processes work, which really helps to understand the theory behind them. Maria also provides two good examples that you work through. In the last example of this chapter you create a date dimension (very useful, as every one of us will have to create one at some point).
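
The normalization idea itself can be sketched in a few lines of plain JavaScript (made-up fields, not the book's example): one wide row with a column per quarter becomes several narrow rows, one per product/quarter pair, and denormalization is simply the reverse:

    // Wide input: one row per product, one column per quarter (invented data).
    var wide = [ { product: "A", q1: 10, q2: 12, q3: 9, q4: 15 } ];

    var narrow = [];
    wide.forEach(function (row) {
      ["q1", "q2", "q3", "q4"].forEach(function (q) {
        narrow.push({ product: row.product, quarter: q, sales: row[q] });
      });
    });
    console.log(narrow.length); // 4 narrow rows instead of 1 wide row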

Validating data and handling errors is the focus of chapter 7. This is quite an important topic: when you automate transformations, you have to find a way to deal with errors so that they don't crash the transformation. Writing errors to the log, aborting a transformation, fixing captured errors and validating data are some of the steps you go through.
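
The underlying idea can again be sketched in plain JavaScript (invented fields, not Kettle code): rows that fail validation are diverted to an error branch, where they can be logged or fixed, while the good rows continue through the transformation:

    var rows = [ { id: 1, age: "35" }, { id: 2, age: "not a number" } ];

    var good = [], errors = [];
    rows.forEach(function (row) {
      var age = parseInt(row.age, 10);
      if (isNaN(age)) {
        errors.push({ row: row, error: "age is not numeric" }); // error branch
      } else {
        row.age = age;
        good.push(row);                                         // main branch
      }
    });
    console.log(errors.length); // one captured error; the run itself keeps going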

Chapter 8 focuses on importing data from databases. Readers with no SQL experience will find a section covering the basics of SQL. You will work with both the Hypersonic database and MySQL. Moreover, Maria introduces you to the Pentaho sample database called "Steel Wheels", which you use for the first example. You learn how to set up a connection to the database and how to explore it. You will use the "Table Input" step to read from the database as well as the "Table Output" step to export data to a database. Maria also describes how to parameterize SQL queries, which you will definitely need to do at some point in real-world scenarios. In the next tutorials you use the Insert/Update step as well as the Delete step to work with tables on the database.
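
As a rough illustration of what parameterizing a query amounts to, here is a small JavaScript sketch; the table, column and variable names are my own, and this is of course not how Kettle does it internally, it just shows the substitution of a ${...}-style placeholder before the query is run:

    // Invented query and variable values; the substitution itself is the point.
    var sql  = "SELECT * FROM orders WHERE order_date >= '${START_DATE}'";
    var vars = { START_DATE: "2010-01-01" };

    var resolved = sql.replace(/\$\{(\w+)\}/g, function (match, name) {
      return vars[name];
    });
    console.log(resolved); // ... WHERE order_date >= '2010-01-01'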

In chapter 9 you learn about more advanced database topics: Maria gives an introduction to data modelling, so you will soon know what fact tables, dimensions and star schemas are. You use various steps to look up data from the database (i.e. the Database lookup step, Combination lookup/update, etc.). You learn how to load slowly changing dimensions of Type 1, 2 and 3. All these topics are excellently illustrated, so it's really easy to follow, even for someone who has never heard of them before.
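
To give a feel for what a slowly changing dimension is, here is a plain JavaScript sketch of the Type 2 idea, with all names and fields invented: when a tracked attribute changes, the old dimension row is closed off and a new version is added so that history is kept, whereas Type 1 would simply overwrite the value:

    // A tiny customer dimension with one current row (made-up data).
    var dim = [ { customer_id: 7, city: "Lisbon", version: 1, current: true } ];

    function moveCustomer(customerId, newCity) {
      var open = null;
      dim.forEach(function (row) {
        if (row.customer_id === customerId && row.current) { open = row; }
      });
      if (open && open.city !== newCity) {
        open.current = false;                       // close the old version
        dim.push({ customer_id: customerId, city: newCity,
                   version: open.version + 1, current: true });
      }
    }

    moveCustomer(7, "Porto");
    console.log(dim.length); // 2 rows: the full history for customer 7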

Chapter 10 is all about creating jobs. You start off by creating a simple job and later learn more about how to use parameters and arguments in a job, how to run jobs from the terminal window, and how to run job entries conditionally.

In chapter 11 you learn how to improve your processes by using variables, subtransformations (a very interesting topic!), transferring data between transformations, nesting jobs and creating loops. These are all more complex topics, which Maria manages to illustrate excellently.

Chapter 12 is the last practical chapter: you develop and load a datamart. I would consider this an essential chapter if you want to learn something about data warehousing. The last chapter, 13, gives you some ideas on how to take Kettle/PDI even further (plugins, Carte, PDI as a process action, etc.).

In the appendix you also find a section that tells you all about working with repositories, Pan and Kitchen, a quick reference guide to steps and job entries, and the new features in Kettle 4.

This book certainly fills a gap: it is the first book on the market that focuses solely on PDI. From my point of view, Maria's book is excellent for anyone who wants to start working with Kettle, and even for those who are at an intermediate level. The book takes a very practical approach: it is full of interesting tutorials/examples (you can download the data/code from the Packt website), which is probably the best way to learn something new. Maria also made a huge effort to illustrate the more complex topics, which helps the reader to understand each step/process easily.

All in all, I can only recommend this book. It is the easiest way to start with PDI/Kettle and you will be able to create complex transformations/jobs in no time!

You can purchase Pentaho 3.2 Data Integration: Beginner's Guide from amazon.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.


Pentaho 3.2 Data Integration

Comments Filter:
  • Re:PDI? (Score:4, Informative)

    by xouumalperxe ( 815707 ) on Wednesday June 16, 2010 @02:01PM (#32592668)
    It's a bit more than "database agnostic" as it can input from a load of non-db sources and output into a load of non-db sinks. I work at a Pentaho shop, and one of our biggest projects involves, on the ETL front, parsing several gigs of Apache logs per day and stuffing the (filtered) results into a db. We do that using Kettle.
  • Re:Enough acronyms? (Score:3, Informative)

    by Timothy Brownawell ( 627747 ) <tbrownaw@prjek.net> on Wednesday June 16, 2010 @03:06PM (#32593442) Homepage Journal

    I'd settle for "what's it for?" and "why would I want to spend time learning how to use it?"

    ETL in general tends to mean moving stuff between databases in large companies. Such as when you have lots of things that each run off of their own database, and then have a big "data warehouse" database that everything goes into (usually in a different format than the individual databases, designed for running reports out of instead of day-to-day use). Or when you replace one system with another, or buy another company and want to shut down whatever they were using in favor of your corporate standard system, etc., and need to move stuff that's in one database in one format into another database in a different format.

  • by b4dc0d3r ( 1268512 ) on Wednesday June 16, 2010 @03:16PM (#32593590)

    I'm going to expand on this one a bit. When it said data integration, I immediately found out that ETL might be Extract, Transform, Load. The only reason I know this is because I work for a TLA type company. Kettle seems to be the name of something that already has a name, "Pentaho Data Integration". I'm not sure why it has two names. It is also part of the Pentaho BI suite.

    A good review would give us a link to this tool, so we can figure out if the book is even relevant. Otherwise the assumption is that everyone knows what it is and everyone is using it. http://kettle.pentaho.org/ [pentaho.org] There's a FAQ which deals with usage, not what it's about, and no overview. So despite finding the website myself I still have no idea what this thing does. Does it solve the problem of exporting data from MS SQL Server and re-loading it somewhere else? Cos that's what I need.

    A good review would also indicate if it's a free and/or open source tool, so we can decide if we're even interested in the tool, let alone the book. The source is available and hosted on SourceForge, so that answers that. But there is a separate link under Products for PDI, with links to Buy. Is this a poor attempt at a slashvertisement? Why would I use Kettle instead of PDI? Is there a difference? http://www.pentaho.com/products/data_integration/ [pentaho.com]

    A good review would also identify the audience of the book, letting people know who might use it. It's a database tool - if I'm a Microsoft shop would I have any interest in reading about this?

  • by Bob the Hamster ( 705714 ) on Wednesday June 16, 2010 @03:24PM (#32593688) Homepage Journal
    "A book about the open source ${ACRONYM} tool ${DICTIONARYWORD} (${NONSESNEWORD} Data Integration) is finally available. ${NONSESNEWORD} ${VERSION} Data Integration: Beginner's Guide by ${AUTHORNAME} is for everybody who is new to ${DICTIONARYWORD}. In a nutshell, this book will give you all the information that you need to get started with ${DICTIONARYWORD} quickly and efficiently, even if you have never used it before.The books offers loads of illustrations and easy-to-follow examples. The code can be downloaded from the publisher website and ${DICTIONARYWORD} is available for free from the SourceForge website. In sum, the book is the best way to get to know the power of the open source ${ACRONYM} tool ${DICTIONARYWORD}, which is part of the ${NONSESNEWORD} ${DIFFERENTACRONYM} suite.
  • by mattcasters ( 67972 ) on Wednesday June 16, 2010 @03:37PM (#32593888) Homepage

    The simple answer is that Kettle is a generic name that is very hard to copyright. Pentaho Data Integration and Kettle are synonyms, although Kettle is used a bit more often to identify the open source project.

    As for the pentaho.com website... you would think that the webcasts, papers, etc. would be hard to miss, but hey, I guess if you don't need a data integration tool you probably don't know what it's for.

    After I did a Kettle lightning talk at FOSDEM a few years ago I met a student who was working on a thesis. He had been gathering data in a database, originating from some electron microscope (or something like that) for the past 6 months. He said if he had known about Kettle he could have done it in a few weeks at most. The problem is that reaching certain non-technical audiences is a very tough call. Heck, it's even hard to convince those people that claim it's faster to code it all in Java/C/C++/Perl/Ruby or even bf. (see other threads below)

  • by Karem Lore ( 649920 ) on Thursday June 17, 2010 @07:14AM (#32600424)

    you have to edit a whole bunch of XML config files in various directories

    If you use the open-source version, sure. If you use the Pentaho BI Suite then no.

    You have a central configuration console and the schema workbench (available free) for schema design. You can clear the cache programmatically by way of a URL, using the API (which can be fine-grained down to the tuple), or through the user or enterprise console.

    Before spouting such drivel, you should look at what exactly you are using and where you have gone wrong in your assumptions. Then, if you are still confused, contact support should you have a subscription.

  • Re:Enough acronyms? (Score:3, Informative)

    by Timothy Brownawell ( 627747 ) <tbrownaw@prjek.net> on Thursday June 17, 2010 @12:31PM (#32603574) Homepage Journal

    So it's what used to be called data conversion in the old days (which is last week, apparently)?

    Maybe. I'm actually in the Data Conversion department, where we use ETL tools to load one-off data dumps for new customers. I think it's that ETL describes the tools, and data conversion is a subset of what you do with them (other subsets being things like EDI / Electronic Data Interchange).
