Video: 9/18/15 Census Scientific Advisory Committee (CSAC) Meeting (Day 2, Part 2)
(Tommy): We’ve heard about the demographic sample surveys at the Census Bureau. Now we’re going to hear a bit from the economics side. Kevin Deardorff. Kevin Deardorff: Thanks (Tommy). I’m just waiting to get the – thank you. Thanks (Tommy) and thanks everyone else. I’m Kevin Deardorff. I’m the Chief of the Economy-Wide Statistics Division in the Economic Directorate. And I have two other people here with me today: Jack Moody, who is the Assistant Survey Director – technically I’m the Survey Director of the Economic Census – and (Eddie Saliers), who’s trying to hide from us, who is the Operational Director for the Economic Census. So we’re just going to give you some recent accomplishments and goals around planning for the 2017 Economic Census.
We’ve developed an infographic which we refer to in the presentation. It’s available on the Internet, but we actually just handed out a copy for you. It’s pretty big, and it’s easier to see in a large printout than on the Web. So we created those four elements of what we think will make up an efficient 2017 Economic Census. We’ll go over some major program milestones, and we’ll do a program summary of where we stand at this point in the cycle. So we’ll start with those recent accomplishments. We developed that infographic around the four key elements. I’m not going to go over the four elements right now; we’re going to go over them in more detail in a series of slides in a few minutes. We’ve identified some planning teams and aligned each of those planning teams with these four critical elements that we’ve outlined. We’re doing research around instrument design for new single-unit and multi-unit electronic instruments.
And then the last thing we mentioned here is what we call our econ-hub vision, which is really our effort to come up with things like unit harmonization, so that the surveys don’t have different business units that they’re sampling. It also will come up with content harmonization, so that we use the same terminology and the same meaning within the directorate, and our goal is also to become the trusted source for economic data when data users are going out and looking for our information. So we have some six-month goals that we’d like to go over. These are things that we’re going to do during the next six months. Just to be clear – I think in our presentation it wasn’t clear that these won’t each take the full six-month period.
So, Jack, I appreciate your comment on that. We will be able to do some of these quicker than six months, but these are things that will happen within the next six months. So we’re going to baseline a project plan. That’ll be sort of the detailed concept of operations for the 2017 Economic Census. We’re going to complete our top-level work breakdown structure for the schedule for the 2017 Economic Census, and we’re going to go much more detailed with our schedule through at least the mail-out phase of the Economic Census. We’re evaluating going paperless as our strategy, using our ongoing annual surveys, and I’ll talk in some detail later about how this is actually going. We’re going to continue our research for developing new electronic instruments, including the prototype development that we’re doing. We’re going to continue our content review of what we’re currently collecting and what other people would like to see us collect, and we’re going to be wrapping that up in the next six months.
And then we’re going to begin our early planning for dissemination of the data to come out of the 2017 Economic Census. Also during the next six months, we’re going to be spending a considerable amount of effort developing a cost model for the life cycle of the census. We’ve contracted with the MITRE Corporation to help us develop this cost model. The goal here for us is really to look across the survey life cycle for annual fluctuations in our spending patterns. We’ll compare this with previous cycles of how we spent money on the Economic Census. Our goal really is to make sure we’re spending funding in the right areas at the right time, and hopefully in so doing we’ll find ways to become more efficient during the process. We also want to make sure that we basically get the best investment for the life cycle of the Economic Census. And our goal here is really to take two different pieces.
MITRE is looking with us at a high level – sort of a top-down approach – at where our spending is, based on the projects we use. We’re also collecting a lot of information on the activities that workers do, and they’re going to put that into Project Server; think of that as sort of a bottom-up approach. And our goal is to build a cost model that actually aligns the top-down and the bottom-up approaches. We expect that they won’t be perfectly fitted at the beginning, but we’ll use that information to compare what we think we’re doing with what we are actually doing and get those into better alignment. So that’s sort of the goal of this cost model.
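The top-down/bottom-up reconciliation described here can be sketched in a few lines. This is purely illustrative: the project names, dollar figures, and tolerance threshold are hypothetical, not part of the actual MITRE cost model.

```python
# Toy sketch: compare a top-down budget allocation with bottom-up costs
# rolled up from activity records, and flag projects that are out of line.
# All names and numbers are hypothetical.

TOP_DOWN = {  # hypothetical project-level budget, in $K
    "data_collection": 500,
    "editing": 300,
    "dissemination": 200,
}

# Hypothetical activity records, as they might come out of Project Server
ACTIVITIES = [
    ("data_collection", 410),
    ("data_collection", 120),
    ("editing", 380),
    ("dissemination", 150),
]

def reconcile(top_down, activities, tolerance=0.10):
    """Return projects whose bottom-up cost differs from the top-down
    allocation by more than `tolerance` (as a fraction of the allocation)."""
    bottom_up = {}
    for project, cost in activities:
        bottom_up[project] = bottom_up.get(project, 0) + cost
    flags = {}
    for project, planned in top_down.items():
        actual = bottom_up.get(project, 0)
        gap = (actual - planned) / planned
        if abs(gap) > tolerance:
            flags[project] = round(gap, 2)
    return flags

print(reconcile(TOP_DOWN, ACTIVITIES))
# → {'editing': 0.27, 'dissemination': -0.25}
```

In this toy run, data collection is within tolerance while editing runs over and dissemination under, which is the kind of misalignment the model is meant to surface and then manage toward agreement over time.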
Currently MITRE’s in their initial fact-finding phase, so we are developing the first cost model, which I would consider relatively crude at this point. But it will be refined. And we are also getting staff to do resource-loaded schedules so that we’ll have the bottom-up information that we can compare with that top-down approach. So basically both of those approaches are being developed in their initial stages right now. So I’m going to take a few minutes and talk about these four elements of the efficient 2017 Economic Census. The first thing that we’re really focusing on is moving to 100% electronic Internet data collection. We’re also going to focus on reducing the burden for businesses. We’re going to automate our operations to increase productivity. And the fourth thing is to improve our data products to reflect the ever-changing U.S. economy. So turning to the first one, what I’m going to do is summarize the information that’s on the infographic, and you’ll have that to look at in more detail later, at your leisure. What we’re really talking about is giving businesses easier ways to respond to our data requests. The outcome of that is we will get 100% electronic response. We think the Internet will be much faster, and we think it will be cheaper to process, since we won’t have paper forms and have to stand up an operation just to do the scanning. But as I’ll talk about a little bit later with our experience on the annual surveys, there are probably some cost savings in the ability to streamline our post-mail-out efforts. That’s going to give us some cost savings that weren’t reflected in our initial thoughts, but it’s something we’re finding out as we go through our testing. We think we’ll get more self-response. We’ll get a speedier business response, which gives us the potential to release data earlier and hopefully also at less cost. And we think we’ll get improved coverage and data quality, because we’ll be able to do things such as building edits into the instrument itself rather than doing more costly follow-ups.
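An edit built into the instrument might look something like the following toy example. The thresholds, wording, and ratio check are hypothetical, not actual Economic Census edit rules; the point is that the respondent sees the flag at submission time instead of getting a costly follow-up later.

```python
# Toy sketch of an in-instrument edit (hypothetical thresholds, not actual
# Economic Census rules): flag implausible payroll-per-employee ratios so
# the respondent can correct them before submitting.

def payroll_edit(annual_payroll, employees,
                 low=10_000, high=500_000):
    """Return a prompt string if payroll per employee looks implausible,
    else None. `low`/`high` are assumed plausibility bounds in dollars."""
    if employees <= 0:
        return "Please report at least one employee, or explain in remarks."
    per_head = annual_payroll / employees
    if per_head < low:
        return (f"Payroll per employee is ${per_head:,.0f}; "
                "did you report payroll in thousands?")
    if per_head > high:
        return (f"Payroll per employee is ${per_head:,.0f}; "
                "please confirm your employment count.")
    return None

print(payroll_edit(120, 40))        # looks like payroll reported in $K
print(payroll_edit(2_400_000, 40))  # passes: $60,000 per employee
```

The first call would prompt the respondent immediately, while the second passes silently; in a paper workflow the same inconsistency would only surface during post-collection editing.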
The second element is to reduce the burden on businesses. Here we’re talking about relying on existing business records as well as including research into using data from third-party sources. For third-party sources, I don’t think in 2017 we’re thinking about replacing the use of administrative records, which we already use quite heavily. I think what we’re thinking is that maybe there are third-party data sources that can provide information on some unique area where we don’t have administrative records. For example, we ask businesses what products they produce, and we’re hopeful that we might be able to find third-party information that will help us fill in some of that missing information that we don’t already have through administrative records. The outcome is to reduce the respondent burden and to maintain or even reduce costs from the 2012 level of spending.
The third element is to automate our operations to increase productivity. Here we’re talking about using Census Bureau-wide solutions to increase productivity and reduce costs. We’re hoping to eliminate duplicative systems and processes. As an example, we’re going to build an instrument to collect information from multi-unit businesses. Those multi-unit businesses will often have a single point of contact, who will then take that inquiry and split it among a number of different people, who may provide information to respond for the entire company – maybe even at an establishment level. But if you think about that conceptually, that’s not terribly different from what the Decennial Census might want to do with enumerating group quarters. They might want to contact a single point of contact – a person, for example, at a college.
Maybe a registrar – and then that person wants to separate that questionnaire into different dormitories, who would respond for them, and then concatenate that back into a response. Well, we recognize that there’s a lot of similarity in some of these high-level approaches. I believe in the last conversation you were talking about where the unique distinctions occur. Well, we think the platform for reaching out to people and letting them respond is something that we can probably do jointly, and then the unique questionnaire would be the part where you start differing in what you’re asking people to produce. We also think we can increase our productivity by using Bureau-wide enterprise solutions. We’re looking at that for both the data processing and the data dissemination systems that we would use. We think we’ll be able to better allocate our staff resources, because they’re not learning unique systems. They can be moved from one system to another – one project to another – because they know the systems.
We also think we’ll be able to enable users to combine the Economic Census statistics with other information, and that information could be collected inside the bureau or outside it. So we’re thinking about things like APIs and the Census Business Builder tool that I think you heard a demo on yesterday. Those are opportunities for us to produce our data in a way that will allow people to tailor their own uses, rather than find and use information the way that we’ve sort of hard-coded it into tables in the past. The fourth element is improving the data products to reflect the ever-changing U.S. economy. Here we’re talking about two major activities.
One is updating the content to reflect the changing economy, and the other is to disseminate information in a more timely manner with more relevant results. I was just talking to (Andrea) during the break about how we’ve sometimes heard a number of people criticize us for actually being too detailed with the number of data products we release. For any of you who are familiar with our geographic area summary reports that we’re releasing right now, that series is hundreds – literally, thousands – of tables coming out through American FactFinder. So you have to go find that specific piece of information, and we’ve heard from a number of people that it’s like looking for a needle in a haystack. So I think we’re looking for ways in which we can not just improve the content, but improve the delivery of the content, in order to make it easier for people to find what they’re looking for.
So the intended outcomes here are a more accurate picture of the changing economy, as I mentioned, and helping users find the information to make those data-driven decisions. And I think what we’re really ultimately hoping for is an improved relationship with our data users, a number of whom will also be the data providers. So in terms of the major program milestones, similar to the 2020 Census, we have phases for the work we’re doing. We’re currently in the research and testing phase, this year and next year. During this phase we’ll conduct the research to determine the new content and make sure that we can support this goal of 100% electronic collection. We’ll develop our collection strategy and develop our electronic instruments. Actually, next year we’ll develop the electronic instruments, then we’ll finalize our content. That will then roll us into our second phase, which is the implementation phase. That will occur in 2017 and 2018.
During this time period, we’ll get the OMB clearance to conduct the Census. We’ll begin the respondent outreach and the electronic mail-out for the data collection. And in 2018, we’ll actually begin data processing for the data that we are collecting. That will then push us into the dissemination phase for the Economic Census, which will cover 2018 to 2020. Following a similar pattern for the data release, we’ll start with an advance release. Our goal is to release that in December of 2018. Just as a reminder, the 2017 Census is actually for the data for 2017, which means the collection occurs in 2018. That means our advance report will come out in the year in which we actually collect the data. We’ll then release our industry series report in 2019, and follow up in 2020 with the geographic area series report. So in terms of a program summary, I’ll go over some things that are working well. We have finalized our re-engineering scope for the 2017 Economic Census. We’ve taken that information and put it into the infographic so that everybody is aware of what we’re doing. The second bullet here is that we reorganized.
We went through a reorganization here within our Economic Directorate. It probably doesn’t sound like a tremendously big deal externally, but internally it was a considerable effort. We’ve now reorganized ourselves functionally, and we’ve developed a plan to actually manage the census programmatically. That sounds like nice language. In practice, for the first time, we actually have one area that’s responsible for all aspects of the Economic Census. So that is a considerable change. In the past, basically, each sector of the economy independently prepared its effort, and then all of that had to get coordinated and integrated into something that, in a way, was stitched together at the back end. Our goal is to prevent that from happening so that we can find efficiencies and do our job a little better. This rolls down to a tremendous number of things, including what is very hard to see externally but creates a lot of effort internally, with things like different approaches to doing imputation.
Different approaches for doing disclosure avoidance, or cell suppression, or the use of noise – we are actually standardizing all those things. In a way, I like to say to people, this is the part that takes a lot of effort and is hard to see. It’s like rewiring the house: if you’re on the outside, you don’t see much of a change, but inside it really does strengthen the infrastructure. It takes a lot of effort, and it’s something that we think will provide a lot more longevity to the program as we move forward. We’re also testing various paperless strategies for the data collection, using our annual surveys. This sort of started last year – well, actually, it started even before last year. Our Survey of Business Owners in the last census was an electronic data collection, and we thought that was fairly successful. What we wanted to do, though, was make sure that if we pushed this to different annual surveys, we could get the kind of information that told us whether the goal of collecting 100% through the Internet was realistic or not. So this past fall, we decided for the services annual survey, the annual retail trade survey and the annual wholesale trade survey to not create paper forms.
We actually went 100% electronic data collection. And we figured, if we have to be ready for the 2017 census, we might as well start now and find out where any pain points would be, and give ourselves a couple of years to refine our approach, figure out where we have some issues, and come up with some contingencies if things weren’t working. Those tests for those three surveys have been extraordinarily successful. We’ve maintained the response rate through one survey that’s not yet done, and for two of the surveys we saw slightly higher response rates than in previous years. I know you had some conversations earlier about whether response rates are a good indicator of quality. Well, rather than just getting a response rate, we’ve actually gone through and looked at where that response is occurring. We’re making sure that it’s occurring across all industries so that we get representative results, and we’ve made sure that it happens with the bigger companies, weighting those so the bigger companies actually have a larger contribution to the final result as well.
So not only is the response rate higher; I think we’re comfortable that we’re actually doing a much better job, because we’re spending more time targeting whom we want to follow up with. So that’s gone quite well. Having said that, those are firm-level surveys, and the Census is an establishment-based survey. So, this coming year, we’re going to go to the Annual Survey of Manufactures, which is an establishment-based survey, and see whether we’ll have the same results there. That one also has single-unit respondents and multi-unit respondents. So I think we started with what we thought were fairly easy surveys to implement this approach, and they went well, and now we’re going to start tackling the harder ones this next year. But that still gives us time: for anything we find out, we still have one more cycle before we get to a mail-out for the 2017 census. So we’re incorporating that feedback. We’re learning some things, and we are also setting up opportunities for people who don’t have access to the Internet to provide that information to us over the phone.
We realize that some information is easier to collect than other information, so we’ll be testing strategies for making sure we get information from people who do have barriers to access. We’re also cross-training analysts to work on both Census and survey information. This is a big deal for us too, because this gives us a lot of resource flexibility. If we suddenly need more people to work on one sector than another in order to get or process information, and they’re familiar with both the Census and the surveys, they know what to look for. They’ve had that experience over the last couple of years, so we can shift resources without having to temporarily hire people and then let them go. (Unintelligible) talk a little bit about what changes are planned. As I mentioned, part of organizing and managing programmatically means all tasks, the budget, our resources, our project outcomes, our metrics – everything now is going to be organized into these four elements that we talked about for the 2017 census. So we can basically say, for the investment and the effort we made, here’s where it went, and here’s the benefit that we got from it.
We’re going to be expanding our use of project management tools – such as Project Server – so we can actually look at the activities people are spending time on. We can then work with that information to tie it back into the budget, to see where we’re spending our money, whether that’s where we thought we were spending our money, and whether that’s where we can most efficiently spend our resources. As I already mentioned, we are going to expand the testing of our paperless collection strategies. Those are going well, so we want to make sure that we know exactly the kind of challenges we’re going to face in the Census before we actually get to the Census, and I think we’re setting ourselves on a path to be able to answer those questions. (Tommy): Kevin, I’ve got three clocks, and they all seem to be close to 11:00, and I need to interrupt for just a moment. Kevin Deardorff: Certainly.
(Tommy): We need to pause for public comments. Do we have anyone who would like to make a comment at this time? If so, please come to this microphone at the end of that table. If you do have a comment, please state your name and affiliation before making it. We don’t have anyone registered, but there might be someone in the room. Okay. (Kevin)? Kevin Deardorff: Thank you, (Tommy). Other changes we have planned, as I mentioned previously: we’re going to use enterprise solutions for both the single- and multi-unit collection instruments. We’re going to expand the instrument testing using prototypes and production surveys. For our new version of a multi-unit collection strategy, we’re going to have a prototype available next summer, which will be used at the back end of data collection for the Annual Survey of Manufactures. It won’t be for all multi-units, but we will have a targeted data collection, which will allow us to get feedback on the delivery system as well as the instrument itself. And we’re going to provide more timely and relevant content.
Finally, for a wrap-up of the program summary, there are some lessons that can be shared. We’ve had some early successes from realigning functionally and implementing these four elements of the economic census. I mentioned a couple of those already with our data collection. By taking data collection away from people who are responsible for all elements of the survey – collection, processing and dissemination – and giving it to a group that focuses only on the data collection part, we found some strategies in terms of when to conduct follow-up, which types of follow-up to use, and how many follow-ups to do. And on one of the surveys we’ve done, we found almost a 25% decrease in cost for the data collection phase. That’s incredible. I don’t know that we’ll be able to replicate that on every survey, and I don’t know that we’ll be able to replicate it on the Census itself, but that alone was a significant improvement. It’s not because the people who were doing the survey before weren’t trying; it’s because they had their efforts spread across a number of different activities, and I think the concentrated effort here is something that really did improve that.
We’re going to have results and feedback from the surveys implementing the electronic reporting. We have heard some feedback from people who don’t have access to the Internet, and we are working with the people developing our data collection strategies to figure out ways to at least collect some information from those people, whether it’s through the phone or some other type of contact, if we don’t have paper forms in 2017. The one ongoing challenge we wanted to bring up is one of the three discussion questions: the challenge we have with collecting detail on the products that companies produce under the North American Product Classification System. This challenge is not surprising to us and not new to us. We are trying to fully implement the NAPCS data collection and release that information with the 2017 Census. The challenge is really that the way companies see the products they produce – from their own marketing perspective, through their own taxonomy – is very different from the official taxonomy we use to classify products so that they’re comparable between the U.S., Canada and Mexico. So I think bridging that gap between how we describe a product and how somebody who produces the product describes it is going to be challenging, in order to make sure that we actually have statistics that are completely useful to the people who are giving them to us. So that’s a sort of high-level summary of where we are with the Economic Census, and I thank you for your time and appreciate any comments you may have. (Tommy): Thank you very much, (Kevin). We do have a discussant – (Jack Levin). (Jack Levin): Thank you. (Jack Levin). ((Blank tape 17:27 to 17:39)) Thank you. So, thank you, (Kevin). I’m going to give my comments from my area of expertise, which is more program management, and then we’ll let the rest of the Committee comment on what they’re more fluent with. So from the presentation, I’ve noticed you developed the infographic, you identified the planning teams, you aligned them, and you reorganized functionally. And I thought this was good. I thought the infographic was understandable.
I thought it was good to have a road map. I think aligning teams – as you said – is harder to do than it sounds. But when you have imperatives like the four things on the infographic, the nice part is that when the teams are aligned, everything they do should lead to those. So when people end up working on something and ask, is this something I should do or not? If it doesn’t fit one of those four, it’s not something you need to do. So I thought that was good; that was a good place to start, and that will be helpful for you. Again, you organized your tasks and budgets into those four elements. You caught that six-month goal of creating a WBS – I thought six months was way too long, and I’m glad you clarified that. So again, organizing structurally and then organizing the tasks in the WBS to those four goals are good things. And we’ll talk about the project management in a second.
You know, I’m not clear on the 100% Internet collection. It’s not my area of expertise, certainly, but I know that few things work 100%. So I mentioned a risk assessment, and it sounds like what you’re doing with your early testing is certainly a risk assessment – whether the risk is around additional phone calls you’ll get, or something else. You know, we get a lot of electronic data, and we’re not at 100% with our customers. That one caught me: whenever you plan something at 100%. I liked the contracting to develop a cost model. But I’m also a big believer in plan/do/check/act, so the cost model should be helpful in planning the Census, measuring how you’re doing against it, and then acting on those results. So I think that’s a good step, which will lead to the other two that I caught. You mentioned maintaining or reducing costs and solutions to increase productivity.
I think we should look to measure that and make it a stretch goal. Maintaining isn’t a stretch goal to me; let’s hit some numbers that say we’re going to reduce the cost, and use the MITRE Corporation analysis to help plan. You set a target. I certainly understand you probably want a little buffer in there, but setting a stretch goal, I think, is a good thing, and it turns the cost model into a planning tool. I like project management. Your mentioning the tools – and I know this was unsaid, right? – it’s more than just the tool of Project Server; it’s project management discipline. So the tool is one thing, but I like schedule and cost control. Earned value is an important tool, and that’s harder to do than it sounds. And then risk assessment and issue identification. It’s the constant project management that will be helpful beyond the tools. And I didn’t hear anything wrong.
But make sure the discipline’s there, not just the tool. You did have three questions. One had to do with the 100% electronic response and what ideas we had, and I’ll let the Committee take that one. The second you mention here was the – what do you call it – the NAPCS database – I’m not sure. The concern is whether people will misidentify things, and it’s going to happen. We’ve had that ourselves, so I’ll give you my experience. We have something called the World Wide Codes Repository, and when we implemented it, the same thing happened. It sounded good on paper, but the users, when they started using those codes, may have misidentified them. And then it got worse when you opened it up to customers and let them do it.
So there’s a high likelihood there will be misalignment. But the best idea I’ve got is a robust description of everything in there, so people can look at it, plus some double checks for things that are illogical – I don’t know if you can do it, but something like: if these things coexist, it’s unlikely that this is an accurate code. And expect that there will be some misalignments, especially since this is your first time hitting it. And then the third discussion point was how to better disseminate the data. And from there, (Tommy), I’m going to turn it back over to the Committee for those three discussion topics. (Tommy): Thank you very much, (Jack), and I’m going to turn it over to (Barbara) to my right. (Barbara): Comments? (Babs) and then (Dan). (Babs Buttonfield): Let me preface my questions: I am not an economist.
I am not a management scientist. I’m an academic. So this may be a naive question. And (Jack) actually commented on this – that going out to MITRE was a good idea. But I’m going to ask you the completely flip question: what are the criteria that guide your decisions to go outside for a cost model? And I’m asking that as much to educate those of us on the Committee who are not in business – and maybe to generalize that question: what are the criteria that cause you to go outside for help in any of your business decisions? Kevin Deardorff: All right. So for this one, from my own perspective, it was fairly simple. We went through a major reorganization, trying to get everybody aligned to doing just one task, like data analysis. We didn’t have the skill set within our area, and I think we were, quite frankly, swimming and struggling to stay afloat.
Just think about combining about a dozen surveys: functionally trying to decide who goes where, realigning the staff into new areas, and then, in effect, training people to do the new jobs. So from our perspective – I’m not going to speak for the entire agency – it was finding somebody who had the ability to step in immediately, who had the expertise that could help us answer a few questions. That was really the major reason we turned to MITRE. And, quite frankly, we had heard good things about the work they had done, and they have been able, within a couple of months, to turn around information that we probably wouldn’t have gotten to in anywhere near that amount of time. (Babs Buttonfield): What brought the question to my mind is that you said they’re in the fact-finding stage, and yet you’ve already got plans for your four elements – so that’s what brought it up. Kevin Deardorff: No, that’s a great point. So I can elaborate a little bit, and maybe that will help.
Their fact-finding is really about taking that top-down and bottom-up approach and merging them into something that’s consistent – pulling together the information in order to merge it. So, tying that back to (Jack)’s point about the stretch goal: I’m relatively new to this area, and the thing I’ve heard from the day I started is that we spend a lot of time editing data. I think what we need is information to actually validate that that’s a true statement. And if there is a stretch goal, it may be that if we can identify that that is where we’re spending an extraordinary amount of time and resources, that might be the area where we can set a stretch goal to reduce our effort by X%. We didn’t put something so specific down there just because we didn’t have the information yet, but that stretch goal would help us not only with resources but also with timing.
If we could reduce the amount of time it takes to edit data, we can save money, we can get the data out faster. So I think we’re trying to identify areas where we think we can actually make a significant improvement. So that’s really what I meant by the fact-finding, if that helps. (Barbara): (Dan)? (Dan Atkins): (Dan Atkins). I have two questions. First, (Roberto Gigabon) is a member of this Committee at MIT. He’s not here today. But he has, you know, a very fascinating Billion Prices Project. I was wondering if – where he’s gathering commercial data for economic analysis – have you had – you or your group had any interaction with him? Kevin Deardorff: So I can’t say that I have, personally, but I believe there have been exchanges from the agency with him.
The Director has – we have a research project that is trying to look at the use of Big Data and how it would be incorporated into benefits from the economic Census. You know, I’m not trying to use this as some sort of justification here, but this past year, I mean, literally next week will be our one-year anniversary from the (three) organization. This has been a tremendous effort, just trying to realign staff and activities. (Dan Atkins): Yes… Kevin Deardorff: And we – it’s been very hard – and during that process – to really be thinking about how do we put research into that activity as well. But I think there is a group doing that. And I think we are interested in how their findings go. And I think we’re interested – and also looking at not just that effort, but other efforts, in terms of acquiring Big Data and looking at how that would potentially be worked in here.
I think what I mentioned earlier was I thought there was quite a possibility that in addition to using administrative data from sources like the IRS, we might use Big Data for things like the product lines. And I think those are possibilities. You know, the prices you’re talking about – something that’s more aligned with (BLS), but I think the same logic would apply there. (Dan Atkins): No. I understand disruption and the limits of a 24-hour day. I was just – kind of a general statement that I think the Bureau should kind of take advantage of the particular expertise of the members while they’re members. And I think (Bob Grove)’s actually got him on his Committee because of the nature of the work he was doing there. The second question is so, just the same one more or less that (Jack) made with the previous interactions whether – can you make any comment about the interaction with your future trajectory and the (setcap) initiative? Kevin Deardorff: Well, I mean, I think our intention – when I talked about using – I think it was the third element that – I think it was the third. Yes, third element, where we’re using the enterprise solutions. That is (setcap). So we are intending to use (setcap) solutions.
Obviously we don’t have the field effort that Decennial does. I mean there’s some parts of it that we wouldn’t use. If we go 100% electronic we wouldn’t use (ICAID). But for the elements that we do use, we would use (setcap) solutions. Similarly, we will use (fed five) solutions for the dissemination of data, so our intention is to use enterprise solutions and align ourselves with those so we don’t build one-off systems, so that we don’t use that force. (Tommy): It’s (Tommy). Right, just a point of information. (Dan), we have had (Roberto) on here, at least on two separate visits, and in fact that was one of the – two of the things that led to his being a part of this Committee. And we actually had him tentatively scheduled for a seminar just before this particular meeting, but he was unable to attend.
So, but we – and on some of the Web scraping, worked with the (unintelligible), but yes, I agree with you. He is a great resource for it. (Barbara): Any other comments? (Juan)? (Juan Ballorca): This is (Juan Ballorca). So, (Kevin), you were mentioning earlier that many of the respondents are actually users of the data also? What is the – do you have information on the level of awareness among all respondents as to the usefulness of the data? Kevin Deardorff: It’s a great question. And this is something that I was talking with both (Ken) and (Andrea) during the break. One of the efforts that we’re undertaking right now that started just in the last month is to work with our communications directorate to actually reach out to a number of groups that we know have been active in the past. Whether they’re trade associations – whether they’re, you know, groups like (APTU) – to ask them, you know, what is good about our data? What’s not so useful about our data? The goal of this effort – which will span pretty much the better part of the next fiscal year – is to really get that understanding of our data user community.
Because I don’t think we have as good an understanding of that user community as somebody – as a program like (ACS) does. And I think what we’re trying to do is use that sort of as a model and say, “How do we understand our data users, and interact with them more than we currently are?” I think we’re extraordinarily strong with our interactions with other federal agencies – like BEA, BLS, the Federal Reserve. I think we’ve done that extraordinarily well. I think we could stand to improve quite a bit with everybody else, and I think that’s what our efforts are right now. So the short answer to you is I don’t think we have a great understanding of that, but this group is not just going to go out and talk to people. But they’re also going to do things like the Web analytics to see who is using what products we release, how frequently are they using them, you know, how much coverage do we get after we release a product? Do we get this huge spike and then it just basically disappears for months? They’re going to look at if we refresh our information with things like social media tweets. Does that increase the user traffic back? I think our intention is to look at all those type of things. To see where could we improve and what would we have to do.
The goal is to come out of that process not just with an understanding, but a road map, of how we’re going to do the improvement. (Juan Ballorca): And just a possible suggestion on that – once you learn about the useful products that you’re putting out, it might even be something that you present to people right after they submit the survey. Say, “And here’s something that you might find useful.” You might even make it a little more immediate – like, for instance, saying “next time” – so they get some value right away. (Barbara): Anyone else? Oh, okay, (Peter)? (Peter Glynn): (Peter Glynn). This is a technical question, but you talked a little bit about disclosure avoidance and adding noise. Can you say a little bit more about what that’s all about? Kevin Deardorff: So I think the issue here is I was really bringing it up as an issue of inconsistency. So, we have a program like the Survey of Business Owners where we actually introduce noise into the data we release, where I believe every other program uses disclosure avoidance via cell suppression. A couple of our other programs – like our County Business Patterns – yes, our Economic Census for the Island Areas – also use noise.
So there’s just a level of inconsistency here. And I think what that requires, then, is two separate groups. A group that, you know, has expertise in introducing noise, and a group that has expertise in cell suppression. I think what happened is that, at times, we’ve almost complicated our own lives a bit here by saying in order to release as much data as possible into a static table, we introduce more and more suppression into the table. So that we’re – you know, in order to give you as much information as possible, we’re also giving you a lot of suppressed data at the same time. So you might see a table where, you know, a significant share of the table ends up being suppressed. I suspect that’s not terribly helpful, and that was one of the conversations we were having during the break where I said that actually requires the user – with the best of intentions on our part – that actually requires a user to do more work, in a way, to sort through what’s usable information.
Internally, though, it also creates a lot more work because, you know, the more suppression you put on this, the more complicated that becomes. Especially when you want to make sure that you can’t back out information from tables. So, I think what we’re trying to do is find more consistency so that we can gain efficiency. But also look at what are the data products that are really critical, because maybe we can avoid some of these challenges by also saying, you know, we’ve put the data out there at a level in an API. You can decide what you want to look at yourself. We avoid creating tables with a lot of suppression because we’ve suppressed the data before it goes into the database itself. (Barbara): Thank you, (Kevin). I think we’re going to have to actually move into lunch and our group discussion. So, if Committee members could get their lunches as fast as possible, so we actually have some time to discuss what we think.
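[Editor's note: the noise-versus-suppression tradeoff discussed above can be sketched in a few lines. This is a minimal illustration only, with made-up NAICS cells and an arbitrary ±10% noise range; it is not the Census Bureau's actual disclosure avoidance methodology.]

```python
# Illustrative sketch only - hypothetical data, not the Bureau's method.
# It contrasts the two approaches discussed above: cell suppression
# (with its "back out" risk) and multiplicative noise infusion.
import random

# Hypothetical establishment table: industry cells plus a published total.
cells = {"NAICS 4411": 120, "NAICS 4412": 35, "NAICS 4413": 845}
total = sum(cells.values())

# --- Cell suppression ---
# Suppressing a single sensitive cell is not enough: a data user can
# recover it from the published total, which is why complementary
# suppression (and the internal work it creates) becomes necessary.
suppressed = {**cells, "NAICS 4412": None}
backed_out = total - sum(v for v in suppressed.values() if v is not None)
assert backed_out == cells["NAICS 4412"]  # the "back out" problem

# --- Multiplicative noise infusion ---
# Every cell is perturbed by a small random factor before release,
# so all cells can be published and nothing needs to be suppressed.
random.seed(0)
noisy = {k: round(v * random.uniform(0.9, 1.1)) for k, v in cells.items()}
print(noisy)
```

In a real table the complementary suppressions must be chosen so that no cell can be reconstructed from any combination of published totals, which is exactly the internal complexity the noise approach avoids.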
And please note (Tommy) is eating his own food. This will not be a government scandal. Right? I’m going to talk. What I was going to do, folks, is in the next two days – I don’t get home until Sunday. But within the next few days, I’ll send out an email to all of you, reminding you to put in comments, things you think you might want to have as recommendations or comments. Again, encouraging the designated discussants – and other people – to put in what you think. I will remind you again, we can make official recommendations to the Census Bureau, we can make comments. We can have some discussion now, but we don’t have a whole lot of time. And also we can have discussions over email about what we would like to have. Remember, we said a couple of things came up that I think there was some level of support for. And when I draft whatever goes to the Census Bureau, I’ll send it to everybody to make sure I didn’t mess it up, so you can fix it.
There was discussion and some agreement, I thought, to have one presentation – which presentation, at the discretion of the Census Bureau – for there to be a video of it, right (Dan)? Well we can talk about whether we want it to be some of the things we want to discuss… ((Crosstalk)) (Barbara): That was one possibility. Another possibility was for some topic – depending on what it is – to have a general discussion and then break into groups to talk about it. And these are fine. These are other kinds of ideas that are somewhat different than the kinds of things we’ve usually suggested in the past. I also wanted to note – (Babs Buttonfield) had another idea that she wanted to present to people for their consideration. Well, people can react some now, and they also can go think about it and react over the Internet. Go ahead. (Babs Buttonfield): This is (Babs Buttonfield).
I was impressed and overwhelmed by (Lisa Blumerman)’s presentation. It was dense, it was informative and I think we didn’t have near enough time to consider it and discuss it fully. And given the importance of the upcoming 2020 Census, I would like to propose that we ask the Census Bureau to devote more time in upcoming meetings to the update discussions and update presentations on the 2020 Census. Perhaps as much as half a day. Now there were, I think, seven or eight of us who came half a day early, and I think we all survived. So I’m actually wondering if the proposal could be twofold. First of all to devote more time to it, but given that we have a day-and-a-half meeting, that means something else has to disappear. And I’m not sure that’s a good idea, so I’m wondering if we could propose to take a half day in front of the day and a half that we do, to devote to updates on the 2020 Census.
So that’s a proposal. (Tommy): Just a point of information. The operational plan for the 2020 Census is going to be heard on October the 6th, and the director does – in fact when we met with you, I think, (Barbara), as well – want to encourage each of the members of the Committee to – if you can – listen in to that discussion, or at least check the Web site for postings of it. It’s from 1:00 until 4:00 on that day. And I do also know that we are planning – though I don’t know how far those plans are, (Sarah), so I’m looking at you. We are planning some interaction with the Committee – with this Committee – following that, but I don’t know that the details of that have been worked out. Is that correct? Are we planning some follow-up with this Committee following the October the 6th, 2020 – we are planning, but the details have not been worked out. ((Crosstalk)) (Tommy): Yes, yes, she made reference to those, so we are planning some of them. By the way, yes, the Decennial always has a wealth of information, and you only saw the tip of the iceberg.
If you don’t know, the Decennial has quarterly meetings – and this is one of those quarterly meetings – where it reports to the (IG) – I think it’s in this room sometimes – sometimes it’s in conference rooms 1, 2, 3 and 4. But it’s a kind of reporting to the public of the plans for that. And the (IG), the GAO, a lot of stakeholders meet. A lot of people from the outside come. Department of Commerce, I mean, to just hear updates on the Decennial census. But this will be one of those. But this particular one is important because it’s our statement to the public of what our thinking is about the design. (Barbara): (Dan) and then (Irma) then (Sunshine). (Dan Atkins): So, to just follow up on (Barbara)’s comment, I – several people mentioned that they thought it was nice to occasionally have face-to-face meetings of these working groups. And, particularly, when you’re trying to build trust on potentially contentious topics. I’ve found coming in for the afternoon meeting on Wednesday to be pretty easy to do – and at least I would be willing to do that more regularly if that was useful. My second point is I would like to recommend that when we hear presentations from various directorates as we go forward, that part of that presentation be some comments as a customer of the enterprise solutions of (setcap) and so forth about, you know, how they’re seeing it.
To kind of – you know, we’re getting the provider’s side, but it would be interesting to get the user’s side – or the consumer’s side. And maybe we could make that request for the future. (Barbara): (Irma)? (Irma): I agree that (Lisa)’s presentation was very informative as sort of an overview, but if we have specific future discussions, I would like to see some very specific things, like what are the administrative records used. What did they find? Not so much about the process of how it happens, but what are the real insights, and what are the real challenges, and where maybe we can help. So, more specifics, I guess, is what I would recommend that we see. Someone suggested that we should break into small working groups. I think having working groups meet the day before is probably a good idea. I personally find – even though I don’t understand all the details of the (setcap) or all these other things that are being discussed and don’t feel like I can provide useful comments – I do think there is value to seeing the big picture, so I actually like hearing personally the wider range of topics, even if I’m not expert on all of them. Because it helps me to put the rest of the stuff that I know a little bit more about into a broader context on how it fits within – whether it’s in the (setcap) or other ways.
So I actually would recommend that if we do working groups, maybe have them separate, but not breaking out into smaller groups during the day. I guess I prefer this format, because I learn a lot about a broader set of things that sort of helps me put the bigger picture together. (Barbara): (Sunshine)? (Sunshine): I agree as well. I think sometimes the naïve questions are actually the ones that end up revealing some of the best insights. I just wanted to add the request I made previously that we have someone who is accumulating questions, right? So again, I often find that – wow – I’m not prepared to suggest that the Committee make a formal recommendation, but if we had follow-up on the questions that we ask, I think that we would be more productive in eventually getting to a point where we could, you know, figure out if we need recommendations or not. The other thing that I would add is that the group that gave the presentation on the demographic surveys provided a lot of supplemental material for me as a discussant.
I’m not sure if anybody else read it, but it was terrific. So I think encouraging the provision of supplemental material actually allows at least the discussant to have some of their questions answered in advance. And again, to provide a little bit more robust suggestions. And so, this is where – especially when we’re getting into, again, it’s just about getting the detailed information. (Barbara): One question I have, which is maybe for (Sarah), over there – (Sarah) who knows everything. There’s, at some stage, a Web site related to the Committee. And I wonder whether it would work when there’s supplemental information like that – even beforehand, from Census Bureau staff – if that information could be put up there so that anyone who wanted to look at it, can. And (Sarah) is nodding.
I think that would be a good thing to do, and people could look at it beforehand, or they could look at it afterwards, if they wanted to. So this new thing, we want to make it actually useful. (Sunshine): The final thing I would just say is the Wednesday working group sounds fine. I’m happy to participate, if it ends up working out scheduling-wise. I still am not exactly sure where we stand on the status of working groups and which ones are likely to be approved; which ones – you know, where we are. The adaptive design one is now defunct, so the people that were on that could be moved back to some of the other ones. (Barbara): I think there’s a Big Data group, there’s a Rocket (setcap) group. I don’t know what all – I don’t remember what the other groups are that actually, currently exist. (Sarah), what am I missing? What are the other current – does anyone know? Man: (Unintelligible).
Group quarters. (Barbara): Group Quarters just died its happy death. Is there a 2020 group? No, I don’t think so. And there’s not an ACS group either. So I think that it’s only Big Data and Rocket (setcap) now. I think that’s right. What? Woman: (Unintelligible). (Barbara): Other questions, comments? (Noel), (Bob)? (Noel Kressy): Well, on Thursday – (Noel Kressy) – we had a report from (Bill Bostick), and a number of us are on the Big Data working group. So that’s the group in existence. And rather than making formal comments, (Ken) and I and (Peter Glynn) and (Willie Deptoe) had informal discussions – not all together, but kind of round by email, and we’d like to make a couple of suggestions. Not for formal response by the Census Bureau Director on that.
What we’ll do is circulate some words a little bit later amongst the group, but the general idea was to commend the Census Bureau on a submission on Big Data that has the potential for leadership at both a national and international level. That the early appointment of the Bureau’s Center director is really important. And another important thing that came up during the discussion is that this Big Data initiative not lose sight of the fact that data quality and estimation quality through (variances) are absolutely fundamental. And that be an important part of the Census Bureau’s initiative on Big Data, so that computation doesn’t take precedence and one doesn’t lose sight of (variance) estimates and quality. So there was a potential spin-off that came out of that discussion that might lead to a new working group, and that was the importance of mixed geographies and mixed frequency-type data – where data sets seem to come together but don’t all share the same geography and resolution in both space and time.
And this came out of some of the discussions with (Nancy Potok) during the working – during the session, as well as afterwards with (Peter Glynn) and myself. And so both (Peter) and I and others we’ve spoken to think that it’s important that we might have a workshop and a working group on that area. And the order of those isn’t clear right now, but (Nancy Potok) had mentioned that the customers – Census Bureau customers – are sort of clamoring for these (unintelligible) answers to questions that involve mixed data sets of different sources, different frequencies, et cetera. And those questions are either – they’re Big Data questions or not, as the case may be. So it’s really a spin-off. I wouldn’t like to think of it as an issue of the subset. It comes up a lot, whether the data sets be big or not. And so, I’d like to mention that – we’ll work on some wording, and we’ll pass it around amongst the (unintelligible).
(Barbara): Just one second, (Bob). I know you’ll put that in writing, but that’s a good example of things that can turn into being sort of comments and suggestions rather than formal recommendations. And also, part of it I’ll underline – we’re allowed to make comments and recommendations about things we think the Census Bureau is doing well. I mean that’s okay too. We don’t just have to criticize them. (Bob)? (Bob Hummer): Okay, thank you. (Bob Hummer). I wanted to go back to something that (Babs) mentioned – the half-day workshop on the 2020 Census. I mean, I think that’s a really good idea if the Bureau is interested in that kind of thing. Set aside that time on Wednesday to kind of go through the different kinds of innovations that they’re making for 2020 and – but then, related to that too, if we do something like that, I think the presentation or presentations that we hear – some overview would be very helpful.
But I think – diving deeper and getting specifics on the – especially the things that they’re struggling with would be really useful. Because, yes, it’s great to hear yes, this is going well; that’s going well. But trying to – from my perspective, working on the discussion for that particular session here – was trying to dig through all the pages and pages and pages – and where are the struggles, and what might be an area that we could actually be helpful on – was very difficult. So it would be great to have such a session if the Bureau also thinks that it’s a good idea, but I think that it would have to be specific enough where I think we could lend our expertise to make it helpful. On that note as well, I think it’s related, but I think guidance to the presenters on time – you know, whether that’s done ahead of time or whether that’s done here – would be really useful. But I don’t think they all have to be standard.
You know, some could be five minutes and be very effective; others may need to go 20 or 30 minutes or something. But I think we need to set guidelines so that we don’t have, you know, hour-and-a-half-long presentations that just aren’t – in the end, aren’t effective. (Barbara): If I could make a comment. All the presentations were really interesting, and I didn’t regret hearing any of it – but I kind of wondered what was going on. Because there’s a set number – amount of time for the whole thing, and if a discussant’s listed – they actually need time to talk, and there is general discussion. And all these people have been to professional meetings and all that. And I really didn’t understand why there wasn’t some kind of self-regulation of how long they were going to talk. Do they really have to be told this is the amount of time for your presentation? One thing that I expect to come up in what people send in over the Internet – or email –
If they don’t, I’ll write it myself – is that many people have said they would like to see a presentation about planned uses and challenges in using administrative data. Like, what – we’ve heard – it’s fine what we’ve heard, but I don’t think we’ve heard enough. We’ve heard a lot about plans to use administrative data, but it’s often unclear whether this is talking about administrative data to decide whether you don’t even have to conduct the interview. Is this to put in specific kinds of substantive information? Is it for particular people? Is it for, you know, putting in answers to some question in general? And several people have said – and I just wanted to mention it – that a presentation and discussion about what they’re actually thinking about for administrative data use and what they see as the challenges and the questions for that would be very welcome.
Who else? (Alison)? (Alison Blair): (Alison Blair). Yes, I just want to tag team on what you just said, quickly. I think the use of administrative data is obviously really, really interesting, and it might also be helpful to start sort of that conversation with an overview of perhaps philosophy. I think about the population estimates program, which is entirely driven by administrative data, right? And it drives federal funding, and it’s a very important program. And so, where did it come about that that program, it was decided, it could be fully driven by administrative data, but other programs, maybe, we are not as comfortable with that. And there may be a strong rationale for that, or maybe that’s just something folks want to start thinking about. But I would love to hear the larger picture and then maybe to drill down on the challenges and issues and opportunities. (Barbara): Who else? (Jack)? (Jack Levin): (Jack Levin). Sorry for – I want to go back to (setcap) for a moment and get some others’ opinions.
So, a couple of weeks ago, I did a TED Talk on my experience with innovation. I’m sorry for being funny here, but innovation – you know, I related it to what Arthur Schopenhauer said – the philosopher – that truth goes through three stages: ridicule, then violent opposition and finally acceptance as if self-evident. I hope I’m wrong, but these innovative big ideas like (setcap) eventually get to violent opposition, where people say, “I’ve got my way of doing it.” And again, I hope I’m wrong. And I think, as that stage comes, the Committee has to be together saying, “This is a good thing.” I mean, and I hope the day doesn’t come. But you’ve got to be together and you’ve got to get through those stages until you get to self-evident. So my thought is kind of like what (Dan) was saying. Maybe we need to understand more about (setcap).
And as I understand more about it, I start realizing how innovative it is and how big it is and how, you know, much of a change it is in the long-term. So it might be worthwhile to understand (setcap) – but not just the happy path. Not just how everything’s great, but the risks in front of us. You know, the fact that people are going to have to align to one system. And you’ve got to figure out how do you align to one system. How does everybody change their processes? And so that’s my thought, and I’m interested in the Committee’s thoughts in that regard. (Barbara): If I could ask a clarifying question, (Jack), I think what you said was really interesting. That’s not the clarifying question. But are you saying – what many of the presenters got asked – that there be a presentation on how (setcap) is getting implemented in the various areas and what the challenges and problems have been, specific to each area?
.. ((Crosstalk)) (Jack Levin): Or will be. (Barbara): …or what they anticipated. Is that what you’re saying? (Jack Levin): Yes. You know, how it’s going to fit in the long-term, and I think (Dan) kind of said it very well – from the receiver’s perspective – how that fits in. I mean, that will be a telling story. And I’m only saying this because I’m in such full support of this that the Committee has to be together when those tough days come – that we understand it, and we say, “Yes, we support this.” Rather than that day saying, “This isn’t what I thought it was.” And help this group get through the violent opposition, should those days come. (Barbara): Okay, who else? Yes. (Andrew)? (Andrew Samwick): (Andrew Samwick). I made one of the comments about administrative data yesterday because I kept hearing it in different contexts. I think the other thing that I heard in different contexts – and maybe it’s because I am at my first meeting – was nonresponse. In the sense that we have such a broad portfolio of different surveys and censuses that are going out, and they’re all sort of struggling with the same challenge that’s fundamental to implementing a survey.
And I sort of wanted at the end – particularly after the tip discussion, I wanted to hear them all lined up together telling us simultaneously – or, you know, so we could compare – exactly the progress they’re making against, you know, nearly the Number 1 enemy of conducting a survey. And so I don’t want to come in on Wednesday next time to do it, but one session – maybe with some read-ahead – would be very good, if there could be some analysis of that across different programs and products (unintelligible). (Barbara): So then, to clarify, even though I sometimes have trouble reading my own notes, what the nonresponse situation is across the various surveys. What the trend has been and what they are doing or planning to do to address it. Is that right? (Andrew Samwick): Right. With an eye toward helping the Committee find a perspective on how that whole problem is at the Census. Then informing that back, to sort of figure out best practices around it.
(Barbara): Okay, who else? (Peter)? (Peter Glynn): (Peter Glynn). I think that, you know, the opportunity for the working groups actually to get together face-to-face makes a lot of sense. The phone conversations, I think, are very, very helpful, but the face-to-face I think would probably also be a useful supplement to that. We’ve also talked a bit about, you know, deep dives into certain areas. Maybe looking a bit more at the 2020 Census and maybe taking a half day to do that. I like these ideas. And however, coming from the West Coast, it’s significant travel time in getting here already. And it’s already a three-day commitment to come here for these meetings. So if we add another – if we move this to a Wednesday afternoon kind of session, that basically adds another day for people coming from the West Coast.
So you know, my alternative to that would be, for example, going a little bit later on Friday afternoon. And maybe some of the people on the East Coast could still get their flight. The people going back to the West Coast, that works perfectly fine to leave later in the day. Time change is in your favor when you’re going back to the West Coast. There are things that potentially we could do on the Thursday, in terms of maybe going a little bit later in the afternoon. I wouldn’t mind ending at 5:15 rather than 4:15. Maybe working over lunch. But let’s see if there are other alternatives if we’re going to regularly extend these meetings to being Wednesday afternoon inclusive. Alternative to that is to maybe think about these other ways of adding more time for discussion. If we’re going to do an occasional Wednesday afternoon thing, that’s a different level of commitment I think would be fine from my standpoint. But if we add it to every meeting, let’s think seriously about the alternatives.
Also, I just want to go back to what (Noel) was saying earlier. And I think there are some very exciting opportunities in terms of integrating different data sets of different granularities, of different data types and so forth. And there is a lot of interesting academic research that is now being done on exactly these types of issues, which come up in many different contexts – not just in Big Data settings. So I hope that when we put together our suggestions, we can put something in about that and hopefully help the Census Bureau leverage all of the exciting things that are going on here at this point. (Barbara): I just – I ask the Census Bureau expert people, is there any impediment to meeting Friday afternoon or meeting somewhat later on Thursday? There’s no reason we couldn’t do that. Because my opinion is – and I may be wrong – if it’s an orientation or something like that, or a working group meeting involving a small subgroup of the Committee, that Wednesday afternoon might be a good idea for those – it’s not the whole group.
But that, in general, to allow us more time – this is just a thought. You can always tell me I’m nuts – that in terms of having more time, it seems that meeting Friday afternoon might be a good idea, depending on what people think. (Tommy): Just a point of information. I’ve told you my service with the Committee goes back to the ‘80s. I think we did at one time have some afternoons, and then there was a feeling that some people started leaving for flights. So that’s what – I just want to point – as a point of information. (Barbara): Is that – go ahead. (Sunshine): Certainly we would be able to do – and we did this in the Adaptive Design Group – plan working group lunches with members of the Census Bureau, right, so that Thursday lunch, it was always planned that the working groups get together. The only dilemma becomes when people serve on more than one working group, but I think as a really easy solution – like that’s a super easy solution. (Barbara): I was just thinking in terms – I’m sympathetic to the problems and the challenges to people on the West Coast.
And it seems that if we thought we needed more time – what (Peter) was saying – rather than having a meeting where everybody needed to be there on Wednesday afternoon, it would be preferable to meet that Friday afternoon. If that makes sense? ((Crosstalk)) (Irma): This is (Irma). I think Friday afternoon – maybe not until 5, but until 3:00 or something. I mean, I’m on some other Committees that, you know, stay on Fridays, and we stay until two or three on Fridays, and no one has left. And a couple of people come from the West Coast, but the East Coast people can still get home, maybe late, but they can still get home. (Barbara): It seems to me that with the time constraints we’ve had for discussion and then being able to go into things in some depth, that even if the number of sessions weren’t increased, if we had meetings until 3:00, say, on Friday, there could be a little more space, and there’d be a little more time for the kind of discussion that I think we’re all pretty eager for. Well, we have about three more minutes or something like that. I’m trying to be good.
Who else? (Juan)? (Juan Ballorca): (Juan Ballorca). So I just had a quick request, and this may be just – I may be the only one interested in this. I’d like to have access to some of the – especially the electronic versions of the surveys that are being used in this half. It would be nice to try them out. I think sometimes the devil is in the details. So we get all the high-level, but it would be nice to actually be able to experience them too. (Barbara): I think it could be done, and (Sarah) could say. A lot of these are available on the Internet, but what could happen would be – we could put on the CSAC Web site links to all the electronic versions of all the surveys, and – does that make sense? So then it would be easy for people to get to those. Does that sound okay? Yes. (Jeff)? (Jeff Lauer): (Jeff Lauer). I just want to second what (Noel) and (Peter) had mentioned about potentially a deeper dive into, you know, some of the innovative techniques that are being used by the Census.
For example, the in-office address canvassing and what techniques were used for that. The correlation between data sets – and an example of that was what (Bill Bostick) had mentioned with the NPD data not correlating with the Census data. And what we could do as subject matter experts to maybe work through some of those challenges. (Barbara): Anyone else? Maybe we’re exhausted. Well, thanks, gang. I think it was a good meeting. You were all really nice to me for the first time I chaired this, and we’re all – I think we were constructive, and I hope the Census Bureau thinks we’re helpful. And thanks, Census Bureau people, for being so nice to us. (Tommy): And now that (Barbara) has finished by noon, the next item on the agenda is closing comments from (Barbara).
(Barbara): You’re all great. Thanks so much, and you’re nice, and you’re fun to go to dinner with. (Tommy): Is there anything else from anyone? I really enjoyed the meeting, and others as well commented and said, “No need to apologize for having gone over on some of the sessions.” But I hope you enjoyed it as much also. So thank you very much, and I hope everyone has a safe trip back unless – (John) and (Nancy) – any final? Thank you very much. We’ll see you. Thank you.