Incorta offers a wealth of ways to derive new tables and accomplish your data engineering needs. These options can sometimes be overwhelming and confusing. Watch this Action On Insights webinar to learn about the options and best practices around when to use each.

Watch now to:

  • Understand the different data engineering options in the platform
  • Learn the pros and cons of each and understand when to employ these options
  • See how to monitor the performance characteristics of these derived tables

Transcript:

Joe Miller: I can see everyone is filtering in. We are going to wait a minute or so before we get started, so hang tight.

Joe Miller: We're still adding people to the session here. We'll give it 30 more seconds, and then we're going to go ahead and kick it off.

Joe Miller: Let's go ahead and get started. Welcome, everyone, to February's Action on Insights webinar.

Joe Miller: This is our monthly webinar series where we share best practices and tips and tricks to help our customers unlock the full potential of Incorta.

Joe Miller: Today we're going to learn a little bit about data engineering best practices in Incorta. Just a few introductions before we get started.

Joe Miller: My name is Joe Miller. I'm the Senior Director of Community and Customer Enablement here at Incorta, and I'll be your host for this session today.

Joe Miller: Joining us here, we have one of Incorta's best and brightest, our Senior Solutions Architect, Jeff Wilgus.

Joe Miller: Jeff has been at Incorta for over three years and has over 25 years of experience in the BI space, so we're very lucky to have him on this webinar today to talk a little bit about best practices.

Joe Miller: Before the session gets started, I just want to mention that if you have any questions or comments, go ahead and enter them into the chat window. After Jeff presents today, we'll try to revisit those questions and get them answered.

Joe Miller: On to the objectives of this session: we want to make sure that by the end of the session, you can understand the different data engineering options available on the platform, learn the pros and cons of each, and understand when to employ these options.

Joe Miller: And then we are going to talk a little bit about some best practices for managing Incorta data. So I will stop there and hand it over to Jeff to get us kicked off.

Jeff Wilgus: Thank you, Joe, and good morning or early afternoon, everybody. It's a pleasure to be presenting this topic again for the second time.

Jeff Wilgus: I think we presented it about a year ago, and it's obviously a topic that is on many of our customers' minds: what are the proper approaches and practices for building out your Incorta environment with your data?

Jeff Wilgus: Data engineering, or what we're doing as data scientists, has been evolving over the years. I've been doing this for 25 years, and many of you, I'm sure, have been doing the same. We've always been in the business of taking our source system data and somehow trying to present it, with the technologies that were available at the time, to report for our businesses in the most expeditious and meaningful fashion that we could.

Jeff Wilgus: Three or four years ago, when I came across the Incorta technology, I realized what a game changer it was, and how it really has the ability to change the way you build out data and reporting, completely differently from all the other enterprise-level tools I had been using for the previous 15 to 20 years. Most of you know which tool names those are, but I won't mention them here.

Jeff Wilgus: That's what we're really here to talk about today, because at the end of the day, you still want to present your users with the best view of the data, getting it to them as quickly as possible, and having an environment that's ultimately easy to maintain and control.

Jeff Wilgus: So how do we accomplish data engineering in Incorta?

Jeff Wilgus: The first statement is: we do extract, load, and transform (ELT), as opposed to extract, transform, and load (ETL). For those of you that have taken source system data and built star schemas from your ODSs into various star schemas and cubes, depending on what your tool was, you understand what we were doing: we were reshaping the data before we loaded it into its final resting spot, so that whatever BI application we happened to be using could use it. Incorta does not do that. We load the source system data natively, and then we model that source system data itself.

Jeff Wilgus: We extract and we load. These become Incorta mirrored tables from your source systems. This is step one: we bring in your data, we create the logical joins between your various tables, and we come up with a business model. And boom, if you don't need any transforms, you're ready to start reporting on your data right now.

Jeff Wilgus: The problem is, I guess, it's never really that easy, right, guys?

Jeff Wilgus: Say this picture right here represents EBS, our Oracle financials, and I also wanted to bring in another picture that was Salesforce, and then another picture that was one of your main historical mainframe transaction systems, and you want to be able to do consolidated reporting over all three. Well, we all know, having done this for years and years, those things don't naturally just join together.

Jeff Wilgus: A product code in one system could be the same product code in another system, but it could be using a different nomenclature: it could have leading zeros, or a suffix or a prefix, or something like that. So you have to do things to transform the data to create those joins between the disparate systems, and that's what we do in Incorta. We use a number of different approaches to creating the tables that assist us in modeling the data that we really want to report on, whether it's a single source system or multiple source systems: a combination of aliases, materialized views, Incorta tables, and SQL tables. SQL tables we'll talk about in a little bit.
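As a hypothetical illustration of the key normalization Jeff describes (the function name, prefixes, and code formats here are invented for the example and are not anything Incorta-specific), a transform along these lines is the kind of thing a derived table would do before the disparate systems can join:

```python
# Hypothetical: the same product appears in three systems with different formatting.
def normalize_product_code(code: str) -> str:
    """Strip assumed prefixes and leading zeros so codes join across systems."""
    code = code.strip().upper()
    for prefix in ("PRD-", "P-"):          # assumed per-system prefixes
        if code.startswith(prefix):
            code = code[len(prefix):]
    return code.lstrip("0") or "0"

# One "product" as seen by three hypothetical systems, all joining on "4711":
assert normalize_product_code("0004711") == "4711"   # mainframe, zero-padded
assert normalize_product_code("PRD-4711") == "4711"  # CRM-style prefix
assert normalize_product_code("4711") == "4711"      # already clean
```

With a normalized key column materialized on each side, the logical join between the systems becomes a simple equality join.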

Jeff Wilgus: We then augment the model so that it gives us the view that we actually want to report off of.

Jeff Wilgus: So we have a toolbox. What are we actually doing with it?

Jeff Wilgus: There are many ways to derive new tables in Incorta when you're trying to augment the tables that you've pulled in from your source. We have materialized views; those materialized views can be written in Spark SQL or PostgreSQL, in Python itself using PySpark, or in R or Scala. It's really your choice, but you can transform and materialize whatever views you need in Incorta using these processes.

Jeff Wilgus: Incorta Analyzer tables are exactly what they might sound like. Think of what one of the most common insights is: the listing table. I take a bunch of data, I drop it into a list, it gives me rows and columns, and I can export it to Excel, for example, which is what many people do. It's that same concept: being able to use the Analyzer, with all its tools and filters and formulas, to come up with a grid, a table, and then save that as a table in a schema that gets refreshed at load time.

Jeff Wilgus: Alias tables are a very powerful tool in Incorta, and the example that I like to use for them is the date dimension. We have a very powerful date dimension that allows you to do things like join on the date and get all these other columns for free: what month it is, what day of the month it is, what the same date was last year, what the same date was last month, and so on. There's all this business logic that you can get from the date dimension without writing some kind of date formula.

Jeff Wilgus: But I have many dates in my system. I might have an insert date, a created date, a posting date, a shipping date. They can't all be joining to the same physical date table, because then Incorta would have a very difficult time trying to define its join path. So instead, you can have one physical table called date, and then create aliases of it: one for shipping date, one for order date, one for pick date, one for create date. Those aliases can exist all over your model. Physically, in Incorta and in Parquet, it's the same table, but logically it's being used with a different alias, so Incorta thinks it's a different table. The joins then become much easier when you're using aliases.
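A minimal sketch of the "columns for free" idea (simplified: Incorta's actual date dimension is a prebuilt physical table, and the column names here are invented for illustration):

```python
from datetime import date, timedelta

def date_dimension_row(d: date) -> dict:
    """Derive the columns a fact row would get by joining to a date dimension."""
    return {
        "date": d,
        "month": d.month,
        "day_of_month": d.day,
        "same_date_last_year": d.replace(year=d.year - 1),
        "approx_last_month": d - timedelta(days=31),  # naive; a real dim stores this
    }

row = date_dimension_row(date(2022, 2, 15))
assert row["month"] == 2
assert row["same_date_last_year"] == date(2021, 2, 15)
```

Aliasing means the same physical rows serve many logical joins: an "order date" alias and a "ship date" alias would both resolve to this one table, each through its own join path.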

Jeff Wilgus: Incorta-over-Incorta tables: we've used those in the past, but this feature is, at least logically, becoming deprecated.

Jeff Wilgus: We've always used Incorta-over-Incorta tables, and for those of you that don't know, that's when I create a table by making a PostgreSQL internal connection to Incorta itself, and then actually writing SQL against my Incorta tables, maybe to create a different view of the data as a new table; but I'm using SQL to do it. We always called those Incorta-over-Incorta SQL tables.

Jeff Wilgus: But they were using the internal PostgreSQL driver that ships with the product, and they didn't necessarily perform very quickly.

Jeff Wilgus: Now that materialized views support a PostgreSQL syntax, you can do the exact same thing using a materialized view, writing your SQL in PostgreSQL. You don't have to do it the old Incorta-over-Incorta way anymore; you can do it in a materialized view with PostgreSQL.

Jeff Wilgus: Starting with version 5 and forward, we now have a new capability called simply a SQL table. A SQL table is where you can write very complex SQL (we'll get into much more detail on this in a minute) and load the results into a schema. You can create them using almost the entire syntax associated with SQL, including nested sub-selects, CTE structures, and so on.
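To make the "nested sub-selects and CTEs" claim concrete, here is a small sketch of that query shape, run against SQLite purely for illustration (the tables and data are invented; an Incorta SQL table would hold the result of a query shaped like this):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE sales (region TEXT, amount REAL);
    INSERT INTO sales VALUES ('east', 100), ('east', 50), ('west', 40);
""")

# A CTE feeding a nested sub-select: regions whose total exceeds the average total.
query = """
    WITH region_totals AS (
        SELECT region, SUM(amount) AS total FROM sales GROUP BY region
    )
    SELECT region, total FROM region_totals
    WHERE total > (SELECT AVG(total) FROM region_totals)
"""
rows = con.execute(query).fetchall()
assert rows == [("east", 150.0)]
```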

Jeff Wilgus: Okay, so how do we accomplish data engineering in Incorta?

Jeff Wilgus: Well, first of all, we're bringing in data from our disparate data sources, whether those be traditional databases; data lakes such as AWS S3, Snowflake, ADLS, or Google's storage layer; flat files; or third-party applications, as we've mentioned before, such as EBS, SAP, or Salesforce.

Jeff Wilgus: And then we have a partnership with a company called CData, and they have, I think, as many as 200 JDBC data connectors, and we seamlessly integrate with many of those.

Jeff Wilgus: There's a whole list of them in the documentation that have been fully adopted and actually brought into Incorta. But you can also use our custom connector if you wanted to try one of the less-used ones and see if it works. It's getting to the point where we can connect to just about everything, so our access to remote data is getting pretty broad.

Jeff Wilgus: Then we model. We do all the things that we've always done: we still model, and we do the transforms that are needed in our MVs or derived tables. Ultimately, that's creating our Direct Data Mapping connectivity to your business data. That's the benefit of Direct Data Mapping: it doesn't really matter which of those disparate data sources the data came from.

Jeff Wilgus: And your user is never going to know, whether you're presenting that data to the user at the schema level or at a business view level, which is what we'd really recommend you do. They have no knowledge of the choices taking place behind the scenes to make this magic all work; they just know that it works and it's fast.

Jeff Wilgus: And we use normal transform techniques: we have formula builders and all the tools in our Analyzer for adding and changing data as it's coming in.

Jeff Wilgus: And then, finally, we present the data to your business users, using, hopefully, business views and dashboards that they understand, so that they can quickly derive insights from what their data offers.

Jeff Wilgus: And obviously, we also have the security capability of showing the data to a specific user based on a business profile, so they only see what they're supposed to see and not everything, which comes into play a lot of times in sales, HR, and things like that.

Jeff Wilgus: So what are some of our best practices for managing Incorta? Being in professional services and starting every engagement from basically step zero, one of the more common questions is: how should I be organizing my data as I bring it in? What should my plan look like?

Jeff Wilgus: We have the concept of schemas and business views. You should group like data together, meaning don't take data that really belongs to the same source and split it up into multiple schemas just because you think it makes sense to do that from a business perspective; the source is still the source, and you might be giving yourself pain later if you try to split those.

Jeff Wilgus: Let's just say you have 100 tables, and you took 25 of your tables and put them in one schema and 75 in another schema. Then you have to manage how you're going to load those; if they can't be loaded at the same time, you have to create some staggered schedule or some other approach for doing that. It's usually best to keep your similar data together.

Jeff Wilgus: You can also group your data based on its frequency of change, or group data based on its dependency on other data. Say I have a set of common dimensions that only need to be updated once per day. Sometimes these are derived tables, because I've brought in my source systems and then I create some common dimensions that take data from multiple schemas and bring it into dimension tables like product or customer; maybe that data only needs to be refreshed once a day. That could live in its own schema, so that, since our scheduled jobs are schema-based, you could schedule that schema to load only once per day.

Jeff Wilgus: Now, maybe there are some transactions that you would like to load every 15 minutes. You wouldn't want to be reloading your daily dimensions every 15 minutes; it wouldn't make sense to do so, and you'd be putting extra strain on your system. So you could have different schemas depending on the frequency at which you want to load your data: daily, hourly, or even more frequently than that. Those are my general statements on that topic. Other best practices include: always remember to set your key fields on all your tables.

Jeff Wilgus: That's imperative for the performance of incremental loads. We only do inserts and updates in Incorta; we don't do deletes. So the only way for us to know, at a record level, whether that record is an insert or an update is based on the key that's identified in the table. If the keys are not set, Incorta does not know what to do, and you could insert duplicates into your table.
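The insert-versus-update decision described above can be sketched in a few lines. This is pure illustration of keyed-merge semantics, not Incorta's implementation:

```python
# Keyed incremental-load semantics: inserts and updates only, no deletes.
def incremental_merge(target: dict, batch: list, key: str) -> dict:
    for record in batch:
        target[record[key]] = record   # existing key -> update; new key -> insert
    return target

target = {1: {"id": 1, "qty": 5}}
batch = [{"id": 1, "qty": 7}, {"id": 2, "qty": 3}]
incremental_merge(target, batch, "id")
assert target[1]["qty"] == 7     # matched on key: updated in place
assert len(target) == 2          # new row inserted, nothing deleted

# With no key there is no identity to match on; the same batch just appends,
# producing exactly the duplicate rows the warning is about.
no_key_table = [{"id": 1, "qty": 5}]
no_key_table.extend(batch)
assert len(no_key_table) == 3    # duplicate id=1 rows
```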

Jeff Wilgus: And remember the specific way that Incorta uses joins: the default join behavior is a left outer join.

Jeff Wilgus: If we think of that in old dimensional modeling terms, where I had, let's say, money in the middle with a bunch of dimensions on the outside: I have a child table that maybe has millions of dollars of sales in it, but each one of those child records is going to connect to zero or one parent record. Conversely, every parent record can connect to zero or many child records. We do this on purpose; the default to left outer join is on purpose, because, as many of you are aware, we have the ability to set base tables, for example, in our business views. So say you've set a base table that's equal to your transaction-level detail.

Jeff Wilgus: That is going to be the grain of your result set. If it's got 100 million rows in it, 100 million transactions, and you make that the base table in your business view, that's where the query plan starts. It starts from that table; it is the child, and then it's going to look for the path from that child to any other table, to get to everything else. And this is where aliases can come into play too, because sometimes, if you have joins that are one-to-many on one side of a table and many-to-one on another side of a table, it can break a join path, so you might have to construct an alias to make that join actually work.
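A small SQLite illustration of the default behavior described above (tables and data invented): a left outer join from the child/base table preserves the child's grain even when a parent row is missing.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE sales (id INTEGER, product_id INTEGER, amount REAL);  -- child/base
    CREATE TABLE product (id INTEGER, name TEXT);                      -- parent
    INSERT INTO sales VALUES (1, 10, 9.99), (2, 10, 5.00), (3, 99, 1.00);
    INSERT INTO product VALUES (10, 'widget');                         -- no row for 99
""")

rows = con.execute("""
    SELECT s.id, s.amount, p.name
    FROM sales s LEFT OUTER JOIN product p ON s.product_id = p.id
    ORDER BY s.id
""").fetchall()

# The result keeps the grain of the child (base) table: all 3 sales rows survive,
# including the one whose parent is missing (its name comes back NULL).
assert len(rows) == 3
assert rows[2] == (3, 1.0, None)
```

An inner join here would silently drop the third sale, which is exactly why the child-to-parent left outer join is the default.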

Jeff Wilgus: Because you've chosen a specific base table, you're saying: this base table is really the source of what I'm interested in looking at; this is where my sales are; and I want everything else, the entire join path, to be derived from that base table. And remember that Incorta is doing that based on its Direct Data Mapping, and it's doing it on the fly. It can't have multiple join paths and try to determine which is the best one; it wants to arrive at a single join path. So that's what I'm reinforcing here: you can use aliases to enforce a specific join path.

Jeff Wilgus: I'll just use an example: so that Incorta doesn't have to join in five other unneeded tables to get to another table that's maybe somewhere else in the model, an alias of that table could be placed closer to the child fact table.

Jeff Wilgus: Okay.

Jeff Wilgus: And then, be mindful of joins that cross schemas; these are a much more difficult scenario. A lot of times the grains from one system to another do not match, so many times you might be creating bridge tables or something similar to be able to cross those boundaries. These joins become very important, and the order in which you load the schemas becomes important too: depending on the dependencies, one schema may need to be loaded first so that the joins all get populated correctly.

Jeff Wilgus: All right. I put this matrix in here for a reason, because we've already talked about the Analyzer table, and there's also an Analyzer view, which uses the Analyzer to create a business view in the business view layer. It's no different from the Analyzer table, but it's run dynamically at runtime.

Jeff Wilgus: So if it's SQL that's going to run very fast, it's absolutely possible to create a view of your data using the Analyzer in the business view. Obviously, it doesn't have a Parquet file, but it can be queried by external data sources, so you can use this matrix to compare. Incorta Analyzer tables do not have Parquet either: they only exist in memory, they're loaded every time the schema gets loaded, and they persist 100% in memory. Which means, if you start and stop your services, for example, that table is going to disappear temporarily until you reload that schema.

Jeff Wilgus: That's one thing you should know. The same is true for the new Incorta SQL table: it exists only in memory, but it's super fast, because the engine is performing the entire query.

Jeff Wilgus: And then, of course, we have our MVs, and then we have our old Incorta-over-Incorta tables. You can refer to this matrix when you're asking: can I get to it through the SQL interface from Power BI? Can I reference it in an MV? Does it support subqueries? And so on. Just a little cheat sheet for you.

Jeff Wilgus: So I wanted to talk a little bit more about this new feature, the Incorta SQL engine. We've already said that you can write SQL and it creates a table for you, in memory, in the schema.

Jeff Wilgus: But it's really very powerful, and here's why: if you're a SQL person, or your IT group is, and it has always frustrated you that our normal join behavior, for example, is always a left outer join, well, using a SQL table, you can use any type of join.

Jeff Wilgus: It also covers a much wider range of ANSI SQL, even greater than the PostgreSQL syntax.

Jeff Wilgus: You can query any physical table from any schema that the user has access to.

Jeff Wilgus: The load sequence inside the schema is maintained. So if you have table A and table B, two regular Incorta tables, and then you create Incorta SQL table C, it will not load until table A and table B have already loaded. It will maintain its referential integrity, if you will (that's not really the right term to use), but it will respect whatever its dependencies are, as long as they're in the same schema, and that's an important point, because obviously it can't control the load order if it's referencing a table from another schema.

Jeff Wilgus: You can create joins between Incorta SQL tables and other tables, just like any other table.

Jeff Wilgus: What are some of the things that it can't do? The new SQL table cannot read from a business view. It cannot read from another SQL table or an Incorta Analyzer table; it can only read from actual Incorta tables.

Jeff Wilgus: It does not have Spark fallback. If you're familiar with some of our other SQL interface processing, where you're running a query from Tableau or something using port 5436, the engine tries to run the SQL, and if it can't accomplish the SQL, it drops down and lets Spark try to run the SQL on a different port. The new SQL engine does not support this.

Jeff Wilgus: It also does not support incremental loads. So, in general, I would say this is a good tool to use, since it's 100% memory-resident, especially if you're creating smaller dimensions. You don't want to use one of these tables if you've got hundreds of millions of rows, for example, but if it's a small enough table, it's a really good tool to use, and it's super fast. But it doesn't support incremental loads, so it can only do full loads.

Jeff Wilgus: Since it's only in memory (this is a somewhat redundant statement), it does not create Parquet files. And, at least for the moment, this is still a labs feature, so if you want to use it, you would have to go into the CMC, go into the configuration tab for labs features, and turn it on. Then, when you go into a schema to add a table and you go down to derived tables, you'll see three options in that table list: an Analyzer table, a SQL table, or a materialized view.

Jeff Wilgus: This is a visual picture of the way the new query engine works. It can query data from any of the other tables in the model, and it allows many-to-many joins; you can do range joins, unions, minuses, intersects, and subqueries, including correlated subqueries. It's very, very powerful. You can use CTEs to create temp tables and reference those temp tables in the SQL. Essentially, you can do almost anything you need to do with one of these tables that you can do in SQL.
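Two of the capabilities listed above, sketched against SQLite purely for illustration (tables invented; SQLite spells the MINUS set operator as EXCEPT):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL);
    INSERT INTO orders VALUES (1, 'a', 10), (2, 'a', 30), (3, 'b', 20);
""")

# Correlated subquery: each order compared to its own customer's average.
big = con.execute("""
    SELECT id FROM orders o
    WHERE amount > (SELECT AVG(amount) FROM orders WHERE customer = o.customer)
    ORDER BY id
""").fetchall()
assert big == [(2,)]   # only order 2 beats its customer's average

# Set operator (MINUS in Incorta SQL tables, EXCEPT in SQLite).
diff = con.execute("SELECT customer FROM orders EXCEPT SELECT 'b'").fetchall()
assert diff == [("a",)]
```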

Jeff Wilgus: For now, that new feature is only available to the loader. That new engine only works in the loader service, and that's why you create it as a table inside of a schema.

Jeff Wilgus: With upcoming releases, that will change. Currently, the loader has the ability to use that engine, but if you're coming from outside of Incorta, a query is still going to go through the normal SQL interface: it goes to the regular engine, and if that can't handle it, it falls back to Spark. This slide shows the old picture; in the future, the architecture will change so that even external SQL clients can go through the new SQL interface, which will hit the new engine, and then you'll get the same benefit for external queries from Tableau, Power BI, and so on.

Jeff Wilgus: Okay, so the best practices are: build your derived tables using SQL-based MVs instead of the old Incorta-over-Incorta method.

Jeff Wilgus: Use Incorta SQL tables where possible, if you don't need to query them through the SQL interface. You can start to use that new engine capability; it supports unions and the breadth of SQL so much better than PySpark or Spark SQL, and even PostgreSQL to some extent.

Jeff Wilgus: If you need fast incremental loads, you can group tables with joins into one schema; otherwise, separate the tables into schemas so that they process in parallel.

Jeff Wilgus: I would add that over the last year we've put a lot of content out on the Community for best practices. There are new articles on many topics in the best practices area; data engineering is one of them, but there are many others: code migration, security, best practices for dashboards, many different things that you'll find in those best practice articles. So I would encourage you to go out to the Community and look.

Jeff Wilgus: That's it, Joe. I guess we move to the Q&A questions.

Joe Miller: Sure thing. Thank you, Jeff, for presenting. We did get a handful of questions through the session, so let's pick through a few of them. One of them was: is the SQL table available for production, or is it just a labs feature?

Jeff Wilgus: Yeah, right now it's a labs feature, but it's been a labs feature for quite some time; it came with 5.0. Now, depending on whether you're in the cloud or on-premises, we're already at 5.1.4, so I don't know what the exact timeline is for when they're going to move it out of labs, but I've used it, and it's pretty stable.

Joe Miller: We had another individual chat in, and this is actually kind of a three-part question, so we can pick through them, but I'll just mention the three components. They would love to hear a little bit about how Incorta handles Type 2 slowly changing dimensions; the second part is Type 1 versus Type 2 changing dimensions; and the third is snapshotting fact tables on a weekly or monthly basis.

Jeff Wilgus: OK, so I'll talk about snapshots first. We do snapshotting a couple of different ways: one is what we call sparse snapshotting, and the other we call dense snapshotting. For a dense snapshot, you can think of it almost as a recreation of every row in the table, based on an effective date for that row; but not just a single row, every row. So say I've got a small dimension that has maybe a list of codes in it, I want to create a snapshot of it, and it has 100 records. In the dense snapshot, I have 100 records on day one, 200 records on day two, 300 records on day three. Each one of those has a different as-of date, and then you can build your queries based on the as-of date, and you're always going to get the right answer. You can do that for your Type 2 dimensions as well.
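The dense-snapshot mechanics described above can be sketched as follows (an illustration of the accumulate-and-stamp idea, not Incorta's implementation; names are invented):

```python
from datetime import date

# Dense snapshotting: each load appends a full copy of the table stamped
# with an as-of date, so history accumulates load over load.
snapshot_table = []

def dense_snapshot(current_rows, as_of):
    for row in current_rows:
        snapshot_table.append({**row, "as_of": as_of})

codes = [{"code": c} for c in ("A", "B")]     # a tiny 2-row dimension
dense_snapshot(codes, date(2022, 2, 1))       # day one: 2 rows
dense_snapshot(codes, date(2022, 2, 2))       # day two: 4 rows total
assert len(snapshot_table) == 4

# Filter on the as-of date and you get exactly that day's version of the table.
day_one = [r for r in snapshot_table if r["as_of"] == date(2022, 2, 1)]
assert len(day_one) == 2
```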

Jeff Wilgus: Now, we don't do Type 2 dimensions or Type 3 hybrids natively, the way we would have done dimensional modeling back in the day. Type 1, of course, is just an update, and a Type 1 change is what I think we would do by default. If I have an employee record, Jeff Wilgus, and I moved from Illinois to Texas, the next time that record loads, it picks up my new address in Texas and overlays the record; there's my Type 1 change. But if I want to keep a historical representation, from a Type 2 or Type 3 perspective, right now we don't handle that natively like some of the other tools do. You would have to code it, and we've done it: you can code it in a materialized view using Python.
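A hedged sketch of the hand-coded Type 2 logic Jeff alludes to, the kind of thing a Python materialized view could implement (function and column names invented; close out the current row, open a new current one):

```python
from datetime import date

def apply_type2(history, key, new_row, today):
    """Expire the current version of the keyed row, then append the new version."""
    for row in history:
        if row[key] == new_row[key] and row["end_date"] is None:
            row["end_date"] = today                  # close the open row
    history.append({**new_row, "start_date": today, "end_date": None})

# The IL -> TX move from the example, kept as history instead of overwritten.
emp_history = [{"emp": "jwilgus", "state": "IL",
                "start_date": date(2019, 1, 1), "end_date": None}]
apply_type2(emp_history, "emp", {"emp": "jwilgus", "state": "TX"}, date(2022, 2, 1))

assert emp_history[0]["end_date"] == date(2022, 2, 1)  # old version closed out
assert emp_history[1]["state"] == "TX"                 # new current version
assert emp_history[1]["end_date"] is None
```

A Type 1 change, by contrast, is the single-line `target[key] = new_row` overwrite: no history row survives.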

Joe Miller: Great. Thank you.

Joe Miller: And I think we have one more question that came in around product roadmap, which, at least for the scope of this session, is a little bit beyond it, so I will reach out to the individual who asked the question and make sure that we follow up with them.

Joe Miller: With that, Jeff, if you'll go on to the community slide, we can wrap the session here.

Joe Miller: Jeff mentioned it a number of times, but we'll just follow through once more: a lot of the information that Jeff has spoken through, Jeff and team have spent some time building into a knowledge base that exists on the Incorta Community.

Joe Miller: So we encourage anyone to go out there and investigate that. In fact, I was putting some links in the chat supporting some of the concepts that Jeff talked about earlier, and you can also find an article about how to code around those slowly changing Type 2 dimensions as well.

Joe Miller: So go ahead and join our community. We have a place for you to have discussions, ask questions, and submit product ideas, as well as participate in some of the knowledge and best practices that Jeff and team have put forward.

Joe Miller: With that, we will wrap up the session. I want to thank everybody for joining today; join us next month for our next session.

Joe Miller: Appreciate it, everybody.

Jeff Wilgus: Thanks, everyone. Thanks for coming.

Hosted by:

Jeffery Wilgus
Senior Solutions Architect
Incorta

Joe Miller
Senior Director, Community and Customer Enablement
Incorta