
Episode #205: Steve Franconeri and Jen Christiansen at the VisComm Workshop

S8 E205 · The PolicyViz Podcast

In this week’s episode of the podcast, I’m playing the recording from the opening moderated panel discussion between myself, Jen Christiansen, and Steve Franconeri at the 2021 VisComm workshop at the IEEE VIS conference. We (the workshop organizers) asked Jen and...

The post Episode #205: Steve Franconeri and Jen Christiansen at the VisComm Workshop appeared first on PolicyViz.

Transcript

Introduction to the PolicyViz Podcast

00:00:13
Speaker
Welcome back to the PolicyViz Podcast. I'm your host, Jon Schwabish. Sort of a different episode coming your way this week.

Overview of the IEEE VIS Conference

00:00:21
Speaker
Now, if you didn't know, a few weeks ago was the IEEE VIS conference. IEEE VIS is primarily an academic conference for those working in the data visualization field.

Creation of the VisComm Workshop

00:00:32
Speaker
There are a few workshops
00:00:34
Speaker
prior to the main conference that try to focus on some of the practitioner part of the data visualization field. So along with a few other folks, namely Alvitta Ottley, Barbara Malay, and Adriana Arcia from Columbia University, we pulled together the VisComm workshop, which is really about visualization for communication, and trying to build out sort of this community where we can get this cross-pollination between the academic side of the field and the practitioner side of the field.

Discussion with Steve and Jen

00:01:03
Speaker
So having said all that, the first part of that workshop that occurred on Sunday before the conference was a moderated discussion that I hosted between Steve Franconeri, who is a professor at Northwestern University, and Jen Christiansen, who is the senior graphics editor at Scientific American. And the conversation was so interesting, talking about all the different ways that
00:01:26
Speaker
practitioners can learn from academics doing the research and academic researchers can learn from practitioners in the field, that I thought I would repost the entire discussion here as an episode of the podcast. So in case you weren't able to join the conference and watch it live, or one of the recordings on the IEEE VIS YouTube channel, I thought I would just post this as a podcast episode so you can listen to it on any of your favorite podcast providers, from Stitcher to iTunes to Google Play to Spotify.

Replay of the VisComm Discussion

00:01:55
Speaker
Or you can watch it, if you want to go back and watch it, over at my YouTube channel. You can check it out there. And so I'm just going to replay that entire conversation. It's about an hour, so it's a little bit longer than the usual episode of this particular podcast. But there's a lot going on. There's a lot of great conversation that came out of that, a lot of great resources and references, all of which I have included in the episode notes to this particular podcast episode.
00:02:21
Speaker
So I hope you enjoy this conversation between myself, Steve Franconeri, and Jen Christiansen. And once again, thanks for listening to the PolicyViz Podcast.

Exploring Academic and Practitioner Perspectives

00:02:31
Speaker
Here's that moderated discussion from VisComm 2021.
00:02:38
Speaker
Good afternoon, morning, everybody. I hope you're well. Very excited for our first session in the VisComm workshop. We're going to have a discussion. We'll see how many fights we can get started. We have two fantastic guests joining us today. We have Steve Franconeri, who is a professor at Northwestern University. And we have Jen Christiansen, who is the senior graphics editor at Scientific American. And so the idea for
00:03:05
Speaker
our discussion today is to take these perspectives on data and data visualization from two parts of the field. So Steve, sort of representing with power and finesse the academic side of things, and then Jen from the practitioner side, the public communication side.

Key Insights in Data Visualization

00:03:27
Speaker
And so we're going to start very simply with sort of our core question and then what I'm going to do is I'm going to ask Jen and Steve to sort of give their short bio so you have a sense of who they are. I'm sure many, if not all of you know these two folks. So I'm going to have them sort of answer the core question and then we're going to jump in and I'm just going to
00:03:47
Speaker
feed them a bunch of questions and hopefully we end up with a good conversation, and maybe we'll end up with some fights and see who can come out on top, the academics or the practitioners. So we'll see. Okay, so our core first question for today is: what should the other party in the data visualization field, researcher or practitioner, know about visualizing data and information?
00:04:15
Speaker
What is each side missing? What don't we know that we should know from each perspective? So what I'd like to do is just start with Jen. Maybe, Jen, you can just sort of introduce yourself and then maybe just one or two thoughts about what do researchers need to know about the practitioner, about the broader communication side of data visualization? And then we'll turn it over to Steve.
00:04:37
Speaker
Okay, well, first of all, no fights, just better understanding. I like to press the buttons and get things going a little bit. I know, I know, that's how you get the viewers. Yeah, right, right, right. And the listeners. Well, so my background is actually in scientific illustration, although for most of my career I've fluctuated between being a visual journalist and a science communicator, at Scientific American and National Geographic and as a freelancer.
00:05:05
Speaker
As Jon mentioned, I'm currently a graphics editor at Scientific American Magazine. In our print magazine and website, we cover research, ideas, and knowledge in science, health, technology, the environment, and society. There are two of us on the graphics team. My colleague, Amanda Montañez, focuses on the fast-turnaround news items, and I generally focus on longer-form feature stories. Sometimes we create the visualizations ourselves, but we also art-direct freelance designers.
00:05:33
Speaker
So let's see, what would I as a practitioner like researchers to know about visualizing data and information? Well, it's likely that most researchers are already aware of this on some levels, so I'm probably oversimplifying here. But sometimes I get the impression that researchers make assumptions about the end goal of a visualization that don't necessarily align with the practitioner's goal.
00:05:58
Speaker
Especially if that graphic is stripped out of its original context. And so sometimes I think that critiques that are centered on whether or not a graphic is successful can be misleading. So much hinges on context. So even graphics that appear in the same publication, for what you could argue is even the same audience,
00:06:18
Speaker
can have wildly different goals. One graphic may aim to present data as cleanly and clearly and as efficiently as possible. A graphic in another story might just be aiming to prompt self-reflection. Another in the same publication might be more playful and serve as a form of entertainment.
00:06:38
Speaker
So Jen, before I give Steve the mic, can I ask: what do you all use as your metrics of success for a visualization? We've talked about end goals, but are there metrics that you're using to determine whether a visualization you've produced has been successful?
00:06:56
Speaker
Yeah, so at this point it's mostly about clicks and how long people stay on websites, and are they scrolling through a full graphic or are they bailing on part of it. But mostly, is the article that that graphic is embedded in doing well and resonating with people on social media
00:07:13
Speaker
and on the website. But I feel as though we've really lost sight of how that translates to print. We used to do focus groups for that sort of thing, and I haven't done a focus group with people in a room in years. So I feel like we're getting a sense of, you know, are people engaged with the digital content, but we don't know a whole lot more than that.
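To make those metrics concrete, here is a minimal sketch in Python with pandas; the event records and field names are hypothetical, not Scientific American's actual analytics schema.

```python
# A sketch of the engagement metrics described above: time on page and
# whether readers scrolled through the full graphic or bailed partway.
# The records and field names here are hypothetical, for illustration only.
import pandas as pd

page_views = pd.DataFrame({
    "reader":          ["a", "b", "c", "d"],
    "seconds_on_page": [210, 45, 95, 160],
    "max_scroll_pct":  [100, 30, 80, 100],  # how far through the graphic each reader got
})

median_time = page_views["seconds_on_page"].median()
completion_rate = (page_views["max_scroll_pct"] >= 90).mean()  # "scrolled the whole thing"

print(f"Median time on page: {median_time:.0f}s")
print(f"Share who scrolled through the full graphic: {completion_rate:.0%}")
```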
00:07:36
Speaker
Really interesting. All right, Steve, I'm gonna hand it over to you. So quick bio and then to that core question, what should the practitioners know from the perspective of a researcher? What should practitioners know about visualizing data?
00:07:51
Speaker
I'm a professor at Northwestern in the psych department. I also have courtesy appointments and hang out a lot in design and computer science and the business school. My academic history is that I studied visual neuroscience, so vision in the real world, in grad school,
00:08:09
Speaker
and worked on more ivory-tower-style studies of what's the capacity of your visual memory, how many objects can you track in sort of simplified displays, and then felt like my displays were getting a little too Petri dish, and about 10 or 12 years ago started
00:08:27
Speaker
doing a lot more translational research inspired by the needs of practitioners. We work a lot in chemistry education: how do we get students in organic chemistry to represent and rotate a complicated molecule in 3D? And then I think the majority of our work in the lab in the last 10 years has been
00:08:44
Speaker
data visualization. So how can we leverage the power and avoid the limits of the human visual system when we're trying to do visual analytics or trying to communicate data to other people's brains? And just like Jen said, Jon, this is not going to be as pugilistic a prompt as you're expecting. So Jen, my question for you is: help.
00:09:05
Speaker
We need you. You know, I think my career story is one of finding the joys of translationally inspired research, of taking the questions of practitioners and using that to guide where we go, and avoiding studying Petri dishes. And so my request is: help us help you. The sorts of issues that you run into in the real world should be inspiring our research more.
00:09:28
Speaker
I actually love that initial prompt that you gave about paying attention to context and goals. I think that's a fantastic direction that we should weight more heavily in the academic world.
00:09:39
Speaker
So that's a really great way to segue into sort of the first part of this, I think, Steve. So, Jen, I wanna ask, and this is really from earlier conversations I've had with both of you for today's session. So one thing that Steve really wants to know, which he sort of just alluded to, is: what should researchers be working on? I think one of the big challenges in data visualization we've seen in the last few years is uncertainty. We saw, especially during the presidential election, there was a lot of
00:10:04
Speaker
Rethinking, maybe, is that the right term? Rethinking how places were doing their estimations, their projections, and also visualizing them. 538 is a great example. But what are the main things that you think researchers should be working on? And Steve, you should feel free to interject and fill in the things that maybe we don't know exist, which will come later, how we're gonna break down some of these silos and get us all closer together.
00:10:31
Speaker
Yeah, so, so this first one might be a Petri dish option, but it's sort of a, I think an easier way to kind of get into the idea of what a practitioner like myself could use, and it's similar in theme to the uncertainty visualization conversations that have been happening.
00:10:50
Speaker
But I want to know if people are working on log scales at all and figuring out a better way to show logarithmic data. And maybe that's just because I work at a scientific kind of focused magazine. But do people even know how to read log scale graphics like scientists and non-scientists alike? And are there other ways that we can show that kind of data?
00:11:16
Speaker
One thing that we use that for is like star charts where it's like luminosity and size. So I'm just not sure if people understand what they're looking at. But as far as things that are kind of more related to that context, and I know there's so many variables in figuring out how to research this, but I'd love to know if and how graphics add value to a full article.
00:11:44
Speaker
Because we're rarely showing just a graphic by itself. It's usually couched in some sort of text. So do people spend more time with an article if there's a visualization included? We have some of those metrics with website analytics. But do they remember what they've read more vividly? Does it impact their impression of what they've read? And does the style of the graphic that's within that larger article impact any of those variables? So that's kind of
00:12:12
Speaker
the core of what a lot of my questions end up revolving around. I can take a first shot at that. So for log scales, yes, there's some work on this. I put a link in the Discord to a blog post that Jeff Zacks at WashU and I wrote last year; because of the pandemic data, log scales suddenly became really important.
00:12:34
Speaker
If you show the trajectory of COVID infections as a linearly scaled graph, people extrapolate linearly and don't realize that if it's going like this now, it's going to go like this later. And of course, translating that y-axis to a log scale, you can now do linear extrapolation.
00:12:50
Speaker
But no one really understands the log scale unless you're a scientist who has been trained to use these things and you're used to them. So that article has some suggestions, and one of them is to give really concrete examples on the y-axis: if you're going to put 100, 1,000, 10,000, give people a sense of what that looks like. This is the number of people on the block. This is the number of people at a public swimming pool. This is your town.
00:13:12
Speaker
to link it more concretely to real world experiences. There's some other ideas in there, but I think that's probably the most productive one.
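To make that concrete, here is a minimal sketch in Python with matplotlib (the data and labels are hypothetical, not from the episode): the same exponential series on a linear and a log axis, with the log ticks relabeled with real-world anchors of the kind Steve describes.

```python
# A sketch of the linear-vs-log comparison: exponential growth looks
# deceptively shallow on a linear axis, becomes a straight line on a log
# axis, and the log ticks get concrete real-world labels. Data is made up.
import numpy as np
import matplotlib.pyplot as plt

days = np.arange(30)
cases = 10 * 1.25 ** days  # hypothetical exponential growth

fig, (ax_lin, ax_log) = plt.subplots(1, 2, figsize=(10, 4))

ax_lin.plot(days, cases)
ax_lin.set_title("Linear scale: invites linear extrapolation")

ax_log.plot(days, cases)
ax_log.set_yscale("log")  # exponential growth becomes a straight line
ax_log.set_ylim(10, 20_000)
ax_log.set_yticks([100, 1_000, 10_000])
ax_log.set_yticklabels([
    "100 (everyone on your block)",
    "1,000 (a crowded public pool)",
    "10,000 (a small town)",
])
ax_log.set_title("Log scale: straight line, concretely labeled ticks")

for ax in (ax_lin, ax_log):
    ax.set_xlabel("Day")
    ax.set_ylabel("Infections")

plt.tight_layout()
plt.show()
```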
00:13:21
Speaker
But there's more research happening on that one actively. Similar to the other COVID-inspired research, that one got people re-interested in log scales because otherwise a lot of the research was from 20 years ago. For your second question on how does including visuals affect the way that people process information, a lot of that research comes from the education literature when you're putting diagrams into textbooks.
00:13:46
Speaker
And one of the surprising things that you find is if you put the diagram over on the side, and the text is here and the diagram is here, many students will not look at the diagram, which we as researchers and practitioners find insane because that's the first thing that we're going to look at because we know that we can powerfully extract information from that.
00:14:07
Speaker
And it turns out that learning to read the diagram is a skill, and then it's extra hard when the text is separate from the diagram and you have to look back and forth and figure out which parts of the diagram
00:14:19
Speaker
match with which parts of the text. So the prescription from the education literature is to interleave them, actually take that text and pop it into the diagram, which, Jen, the work that you art direct at Scientific American absolutely does. So you would rarely have all the text here and then the diagram. You're putting text boxes with arrows and stepping people through how to read the diagram and guiding them over time, which is exactly what that literature has discovered is so important.
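A minimal sketch of that interleaving advice, assuming matplotlib and made-up data: the explanatory text sits inside the figure, attached to the features it explains, rather than in a separate caption the reader has to shuttle back and forth to.

```python
# A sketch of interleaving text with the graphic: annotations with arrows
# step the reader through the chart in place. Data is hypothetical.
import matplotlib.pyplot as plt

years = [2018, 2019, 2020, 2021]
values = [3.1, 3.4, 5.9, 4.2]

fig, ax = plt.subplots()
ax.plot(years, values, marker="o")

# Explanatory text lands where the reader's eyes already are.
ax.annotate("Steady baseline through 2019",
            xy=(2019, 3.4), xytext=(2018.0, 4.8),
            arrowprops=dict(arrowstyle="->"))
ax.annotate("2020 spike: the pattern\nthe article discusses",
            xy=(2020, 5.9), xytext=(2018.6, 5.7),
            arrowprops=dict(arrowstyle="->"))

ax.set_xlabel("Year")
ax.set_ylabel("Value")
plt.show()
```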
00:14:49
Speaker
So to that point, Jen, a lot of the work that you all do at Scientific American is taking
00:14:56
Speaker
this pretty dense scientific research, distilling it down, improving or making even better graphics, and then trying to integrate the story with the graph as well. So can you talk a little bit about that work and that process, and how you think about taking the research literature, where those things are kept separate, and bringing those two things together?
00:15:24
Speaker
Yeah, so as you implied, we do often start with data that's been pre-analyzed and published in a peer-reviewed paper. So we're not necessarily doing investigative work; we're doing, okay, the scientists came to this conclusion and here is their supporting data. Sometimes our feature articles are written by the scientists that actually did that work. And so we have a direct line to the content experts, so we can get them on a call and
00:15:50
Speaker
and talk them through the graphics that appeared in their paper and really get to the heart of what is the critical bit in here that should be highlighted. In other cases, we're working with journalist authors, so we're taking a bunch of different pre-existing pieces and kind of putting them together, not in the same chart, obviously, but to kind of create a story.
00:16:15
Speaker
The first thing I'm doing is stripping out jargon, and that also means visual jargon, so like the symbols and chart forms that carry highly specific information within a specific context can be really efficient to communicate with others that are fluent in that language.
00:16:34
Speaker
But it's like a brick wall to outsiders. So a lot of my job revolves around either knocking down those brick walls and kind of reinventing the visualization (is there a different form that we can use that gets rid of a lot of that visual jargon?), or adding footholds into that wall.
00:16:51
Speaker
So those footholds can be like annotations, aesthetic refinements, changes in color palettes and symbols that just kind of help establish a visual hierarchy or just including really clear instructions for how to read the chart. But we are sort of approaching a visualization as if we're walking somebody through it one step at a time. And how can we do that?
00:17:17
Speaker
with guiding their attention with color or annotations to kind of take it one step at a time.
00:17:24
Speaker
So Steve, to that point of having graphs integrated in the page itself, is there a reason why the visualization research community hasn't done what it sounds like the education community has done? I'm not familiar with the education research, but is there a reason why visualization-specific researchers have not been... I mean, in my experience, it's like: here's a graph, we're exploring why or how people read this graph, but it's just
00:17:52
Speaker
a graph, right? And you did some really interesting work on, like, the connected scatterplot, but it's not embedded within a larger piece. So is there a reason why the research community in data vis hasn't been exploring these broader
00:18:05
Speaker
merged pieces? Yeah, I'm assuming inertia. That's just the format. We have a visualization and then we have a caption under it and you're constantly looking up and down and looking up and down and some text over on the page and you're looking up and down and that's just the way that we typically do it and we keep doing it. But in some of my papers, I like to have a single figure with words in it that just explains the whole paper.
00:18:30
Speaker
And that actually is the first thing that I make and I realized that I think about my own paper in a different way once I do that because I can see everything holistically.
00:18:39
Speaker
Things are changing a bit. I know that when I read one of Matt Kay's papers, they have the little graphs that show the distribution that creates that mean that's being quoted in the paper. There's a little graph right there, in line with the text and the data comments, et cetera. There are initiatives to start to interleave language and visuals more effectively. And I'm excited to see those developments.
00:19:08
Speaker
Yeah, that's great. And if I might jump in, I am thrilled that there's progress on that front because it is really hard as a practitioner to be reading some of the literature and not seeing the guidelines being enacted by the people who are saying this is what you need to be doing. It's sort of a little show me. And so it's hard to take some of the guidance seriously if it's not being
00:19:30
Speaker
actively used. That said, I also understand that journals often have very strict publishing rules and protocols in place. So I'm really looking forward to when that really kind of takes off and we can actually start to see a lot of the advice in action.
00:19:48
Speaker
So I want to flip this initial question over to Steve. So we started with, for Jen, what should researchers be working on? So for Steve, what are two or three things that you wish all data viz practitioners knew or understood about cognitive neuroscience, cognitive science? What should we have in mind? What are the top three things that we should have in mind when we're making a graph or a dashboard or a longer piece, a longer article?
00:20:18
Speaker
I think that folks like Jen already do this, but I'll say for all practitioners, two things I'd call out would be the types of storytelling techniques that practitioner guides and books talk about. I think those are really important. And then critique would be the second thing. So for storytelling, everybody knows that your visual system is very powerful, 40% of your brain, et cetera. But that visual system is really good at locking into single perspectives.
00:20:47
Speaker
If you have multiple patterns you can see in something on the screen, you tend to lock into one. So imagine the duck-rabbit figure, that ambiguous-figure illusion you're all familiar with. There are these great experiments from the 90s where, if you show the duck-rabbit and someone looks at it and they say it's a duck, and then you take it away and you don't let them see the actual image again, and then you tell them,
00:21:08
Speaker
Think about it in your own head. Could you see anything else there? They go, no. Because the human brain is really good at locking into a single organization, a single perspective. But then you put it back on the screen and you say, is there anything else? And now people can reassess. So human brains are really good at doing that. And when you have a data visualization or a diagram, there's a series of perspectives that you have to take on it. You have to look at this difference, this trend, this statistic, et cetera. And it takes time to savor that.
00:21:38
Speaker
One of my favorite quotes on this topic is from one of my heroes in the graph comprehension literature, Priti Shah. And it's: reading a graph is not like looking at a picture; it's like reading a paragraph. And this is something that I'm trying to repeat in every public forum that I'm in, because I really believe that that's true. And I think she nailed it 15 years ago when she was writing papers on this topic. And it takes time to sort through those perspectives. Not for everything. So Jen has a nice framework of
00:22:06
Speaker
representative illustrations, more pictorial graphics, and it's a dinosaur, it's a virus, you get it within a couple hundred milliseconds. But as soon as you have a diagram or a data visualization that's more than trivially simple, there's a series of perspectives that you have to take and it takes time. And there's many paths that you can take to do that. There's a series of interpretations. That paragraph has many ways that you could write those sentences and it can go in the wrong direction or the right direction. So those storytelling techniques
00:22:35
Speaker
highlighting, annotating will guide your readers to seeing the right patterns. And people don't always do that. And the reason that they don't do that is they have a bit of a curse of expertise. It's a duck rabbit. You know, you should see a duck. You see a duck. And you know what? People are really bad at realizing that other people see the rabbit. Human brains are terrible at
00:22:56
Speaker
taking the perspective of other people. We use our own experience to simulate what other people are seeing, and that leads us to assume that we're communicating a lot more than we are, and people see different things. So that's where critique, that second aspect comes in. Once you're an expert, you see the right pattern in the visualization. Maybe you think that your storytelling is good enough,
00:23:18
Speaker
Someone like me, maybe I'm decent at it. Someone like Jen does it and it's probably fine as it is, right? Because it's just so much experience in this. But in general, getting critique is critical. Put the visualization, the diagram in front of a group of other people and ask them what they see and whether they get it and collect hard data on whether they see that complex paragraph in the same way that you do. So I say storytelling and critique would be my two that I'd pull out.
00:23:47
Speaker
Yeah, if I could, if you don't mind if I jump in there. I love hearing you say that because I think, at least in the journalism world, or at least I should probably only speak for myself here, but we're very good at asking our colleagues for feedback as part of the process. But I think I personally need to get a lot better at trying to figure out how to ask my intended audience for that critique and that feedback, because my colleagues are coming at it from different points of view, but we also have a lot of shared
00:24:15
Speaker
You know, things that we're looking for in an article; like, we've all read the draft of that manuscript already, and we can't get that out of our heads. Whereas, yeah, trying to figure out how to get a cold reader to critique something is, I think, something I've dismissed as, well, that's too hard, because we're working under embargo or this or that. But it seems like it's a pretty critical thing that I need to figure out how to do.
00:24:38
Speaker
And that time aspect is hard too, making the time to do it. I'm a little bit of a practitioner as well, in that I teach sort of science communication, and all my classes are called something like presenting your research or communicating your research, or the undergrad one is sometimes show and tell. And in all of these cases we talk about critique, and invariably this question comes up of, well, I'll kind of show other people that know this topic because I'm making it the day before, and
00:25:04
Speaker
And I run into this as well. And the advice that comes up in the room typically is pre-book a meeting with people that are outside of that group two weeks before to kind of commit yourself to it. And that's what I wind up having to do myself. I book a lab meeting for a research talk a week before the talk and I'll feel bad if I cancel it. And then that gives me a chance to test out the material. And then we'll try to make sure that we have some undergraduate students who are unfamiliar.
00:25:31
Speaker
with the work in the room so that we're not only getting advice from folks that know the area really well, but it's really tough to plan ahead to do that.
00:25:42
Speaker
And Steve, you mentioned, in this process of critique, collecting hard data. So, Jen already mentioned page views and time on page, but when you're doing that little, I'd call it, it sounds more like an informal focus group, what are the hard data elements that you're trying to collect? Oh yeah, I had a curse of expertise for that; I said it and that didn't make any sense. I mean, don't just imagine that people understand things,
00:26:09
Speaker
And don't even take their word when they nod because they're being too nice or they don't want to look like they didn't get it. And so if you have a culture where this doesn't feel mean to ask, if you can ask people, can you summarize what came out of that presentation? Can you tell me what
00:26:27
Speaker
what pattern you're supposed to be seeing in this graph? Is it totally clear to you? And actually get real data from their responses, the way that we do in an experiment, instead of just an assessment, from their perspective, of whether they got it, because humans are also really bad at that. People think they understand things until they need to explain it or state it. The same thing happens to me as well; when it comes time to teach or write the paper, I realize that
00:26:51
Speaker
I didn't actually understand that topic as well as I thought I did. So hard data means: try to get them to regurgitate information, and use that instead of their own assessment. So there's a question in one of the various chat windows that's relevant to this part. So before we switch gears, I want to get to that. And for folks who are watching, feel free to add your questions through any of the various ways that you can send in questions. And we'll talk for another 15, 20 minutes or so and then have plenty of time for Q&A.
00:27:21
Speaker
Laura put in a question about: what do you think about designing visualizations that are targeted and meaningful both for experts and for a general audience? And I'll go to Jen first on this, because I suspect you have this challenge all the time. You have sort of a general reader of Scientific American, and then scientists who are reading the magazine or the publication as well.
00:27:45
Speaker
Yeah, and often our author is a scientist, so it's also a bit of trying to convince them that this way, although it's not the way they would have done it, is still valid for them and their colleagues, as well as our broader readership. This is one of my favorite challenges. It's kind of like the makeover challenge. It's like, OK, here's this data set. How can I make it over in a way that surprises and delights the specialist and helps them see things in a slightly different way, whether that's
00:28:13
Speaker
not necessarily seeing a pattern they hadn't seen before, but maybe; but also seeing something in an aesthetically pleasing way, or something that triggers a different emotion or connection than just that analytic part of their brain. So it's sort of one of my favorite challenges: how can we create something new from this data set or from this existing chart
00:28:40
Speaker
that delights and engages and provides information to a broad range of audiences. One of the ways I do that is by bringing in freelance designers that think outside of the box and have a reputation for doing that. I'm thinking of a wild bee piece that Moritz Stefaner did for us years ago that still kind of
00:29:04
Speaker
makes me smile, because it presented the data in a way that the scientists hadn't seen before, and in a way that was really tied to the topic behind the story. It wasn't just a chart that felt anonymous and disconnected from the content it was showing. The bee, the hexagon pattern, was kind of embedded in it, and not in a gimmicky way, in a way that really worked.
00:29:33
Speaker
So it's mostly just a challenge of trying to figure out how to honor that data, but kind of provide another connection, another way for people to connect to it.
00:29:44
Speaker
Yeah, just to follow up on that, I've been amazed at the kinds of work that happen. My hero for this is Jen's work at Scientific American, or the work that data journalists will do, where I would think that to show this data set, you'd have to reduce it down and only show the bar graph version. You couldn't show that complex network analysis. You can't show parallel coordinates.
00:30:06
Speaker
but these folks can do it. They step people through how to read these charts and they still leverage the power that these more advanced moves have, but they still step people through these explainers of how to read them. So I would typically think that I would need to distill it, but given the thoughtful design that goes into them, they're great at teaching people how to read them and therefore maintaining that extra power that those more advanced visualization types carry.
00:30:37
Speaker
And I know a lot of, sorry. No, go ahead, Jen. It's been said by many people, I know Nigel Holmes and Alberto Cairo have said this, but it's all about clarifying, not simplifying. And so kind of using that mantra and sort of trying to figure out, yeah, how can we clarify this really complex thing in a way that will surprise and delight the scientists as well? Because they're expecting that we're going to have to strip it down to its very basics, kind of as you alluded to.
00:31:05
Speaker
Yeah, so you're both doing a really nice job of helping with the segues from one section to another. So I want to turn to this idea we've already been talking about: helping people understand how to read different types of data visualizations. So Steve, you said earlier that there are sort of these basic graphs that we all sort of know and understand almost instinctually.
00:31:30
Speaker
So first, I guess, it's sort of a two-part question. First, what do you put in that little box of graphs that we can basically assume everybody knows how to read? And second, is there a research base for that, other than just kind of assuming that everybody knows how to read a bar chart?
00:31:54
Speaker
I think I can. So the list is gonna be the things that you knew up through eighth grade: the bar charts, line charts, pie charts, stacked bars, et cetera. The things that are not on that list are gonna be Marimekkos and connected scatterplots, and then all the way up to fancier things like parallel coordinates. And I don't know of... there is some... I know that there are folks doing research on what
00:32:19
Speaker
lay audiences tend to understand. I don't know if I can quote the folks that, Evan Peck does some great work along these lines, but I don't know if there's a paper that curates, if you are a typical member of the public, will you understand visualization X? That's actually, and then think about different populations. That'd be a great paper, but I don't know that anyone's curated something at that high of a level.
00:32:48
Speaker
So then, Jen, to you, how do you think about this from the publication, from the practitioner side? How do you think about this balance between graphs that we expect people to understand quickly and easily versus the Moritz graph is a great example that's going to take more time. People are going to have to investigate it. They're going to have to engage with it more than just saying, oh, that's a line chart. This line's going up. That line's going down.
00:33:12
Speaker
Well, in all cases, we have a chart title, or not even just a chart title. We have a box title, introductory text, then the visualization. So we're already setting people up with a here's-what-you're-looking-for, here's-what's-significant. So even if they are struggling to read something, they've already been primed, much in the way we're saying, you know, here's the duck, look for the duck. And then hopefully they can
00:33:38
Speaker
see the richer context with that full graphic, but ultimately we're trying to prime them for success. In terms of helping people with more kind of bespoke solutions, because sometimes we're running visualization solutions that, you know, don't even have necessarily a name. You know, you can't look up how to read it. We just really conversationally say, you know, here's how to read the graphic.
00:34:04
Speaker
Like, we're not doing a key or a legend that just shows the colors and the patterns, but really just say, you know, literally, right, every dot represents a star. The color of that star, you know, that dot represents this. And just as if you were reading, you know, walking somebody through it, like you were just, you know, telling your friend next to you, like, okay, here's the dot. That's what that means. And the color means this. And the distance means that. And just really in a conversational way in plain language, just kind of set people up for success that way.
00:34:35
Speaker
And I think that's a great rule. Great designers have an intuition for this, but typically something more complicated goes up on a PowerPoint slide and it's just, here you go. And the author just starts talking over it because of that curse of expertise. And I think adapting those same techniques for any and when anyone communicates data with an unfamiliar representation.
00:34:57
Speaker
Step through it one thing at a time. Let's just show the x-axis: gray out everything else on your slide and actually just show that. And now let's just understand the y-axis. Now here's one point; let's understand how that works on both axes. And you know what, the size varies too, and here's how to think about that. Now people are ready for more complexity. Now you can throw on more points, or you can add in those other dimensions. But I love that technique of stepping things in
00:35:22
Speaker
one element at a time. Let's talk about one variable, or one way of visually representing variables, at a time. And it's something that that curse of expertise typically prevents for presenters and authors.
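Here is a minimal sketch of that stepped reveal, again in Python with matplotlib and made-up data: four panels stand in for four slides, each introducing one encoding at a time.

```python
# A sketch of a stepped reveal: introduce one visual encoding per "slide"
# rather than showing the full scatterplot at once. Data is made up.
import matplotlib.pyplot as plt

x = [1, 2, 3, 4, 5]
y = [2, 5, 3, 8, 7]
size = [30, 80, 50, 200, 120]  # a third variable, encoded as point area

fig, axes = plt.subplots(1, 4, figsize=(14, 3))
titles = ["1. Just the x-axis", "2. Add the y-axis",
          "3. One example point", "4. All points, sized"]

for ax, title in zip(axes, titles):
    ax.set_title(title)
    ax.set_xlim(0, 6)
    ax.set_ylim(0, 10)
    ax.set_xlabel("Dose")

axes[0].yaxis.set_visible(False)   # slide 1: only the x-axis exists yet
for ax in axes[1:]:
    ax.set_ylabel("Response")
axes[2].scatter([x[0]], [y[0]])    # slide 3: walk through a single point
axes[3].scatter(x, y, s=size)      # slide 4: full data plus size encoding

plt.tight_layout()
plt.show()
```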
00:35:51
Speaker
So Steve, on the research side, when researchers are testing these sorts of things, whether people understand how to read a scatterplot or a connected scatterplot or any of these other graphs that we've been talking about, do you feel like bringing people into the lab creates a sort of artificial context? That's not really how people are interacting with these visualizations out in the world. I mean, I'm asking this question because you mentioned Evan Peck, who has this somewhat famous paper where he actually went into farmers markets in central Pennsylvania and sat down with people and asked them specifically. So I'm curious about
00:36:15
Speaker
the balance between bringing people into our lab, who are undergraduates or graduate students at the university, or using Mechanical Turk, versus going out into the community and actually sitting down with people.
00:36:29
Speaker
This is a great question. We typically divide this into two categories of study in the lab. One is going to be: what can the visual system do if you know how to use it in the right way? So if you want to judge a correlation and you're using a scatterplot, you're going to be great at it, if you know what to do. If you do it with something else, two bar graphs or parallel coordinates or something, you're going to be worse at it, even if you really, really know what you're doing in both cases.
00:36:55
Speaker
So there you're studying the power and limits of the visual system and what it can compute. And in those cases, I wouldn't think there's going to be a lot of variability among people. If you do take the time to teach them how to read it, this is what human brains are capable of. And so that's one end of what we and others study. But then the other end is exactly where you're getting at this understanding question. Do people know how to turn the knobs in their visual system
00:37:20
Speaker
and know what patterns are relevant and know how to move through a sequence of views of the data to read that paragraph over time. And there's going to be huge individual differences in that. And in those cases, people tend for convenience to study crowd workers, mechanical turkers. And that's why that work by Evan Peck was so exciting that he broke out of that model.
00:37:42
Speaker
and actually worked with real folks, which is really important if you want to communicate science to the rest of the world. I'll turn that back to Jen, because Jen, you mentioned earlier how it's something that you all used to do, have focus groups, maybe a little bit more when we were sort of a print first world. If you had, and I know you don't, but if you had unlimited time and unlimited budget, how would you think about
00:38:10
Speaker
doing this sort of, I guess I would call it research. How would you do this sort of research or focus groups now, especially now that we're in this digital-first world, and also COVID, where everybody's isolated a little bit in their rooms in front of their computers? How would you think about doing this on a practical level from Scientific American?
00:38:33
Speaker
Well, maybe this isn't a practical concept because of the time and the money involved, but I would love to have that feedback earlier in the process. Like I'm reading a manuscript and I think I know what needs to be visualized to help somebody give them more context. Like what point in this article would benefit from a graphic or from a data visualization or whatnot?
00:38:58
Speaker
So even starting from there: are my instincts correct on that front, or is somebody who's a cold reader saying, well, actually, I think this other point is something that I want to see before I believe it and take your word for it? So just kind of understanding if I'm, first of all, illustrating the correct things that are answering questions that people have. But then,
00:39:19
Speaker
you know, sometimes we're exploring different ways of solving that problem. And so at that stage, is there a sense of, oh yeah, this answers my question more clearly than that approach would. So there's a few steps along the way, but mostly I think it would be, if we were just waiting till the end and kind of doing a focus group piece on it, just asking questions like, first of all, did this graphic add to your experience here? Do you feel like you have a greater understanding because of it?
00:39:46
Speaker
And also, as you mentioned earlier, Steve, to have them actually summarize what they got from it, because we don't want a yes or no answer from that, just to kind of see if the goals aligned with what actually is being interpreted at the other end.
00:40:06
Speaker
I'll just briefly mention that I put in the Discord two links to folks that are looking to build platforms where you can find more diverse audiences in terms of their graphical literacy levels. There's Katharina Reinecke's LabintheWild, and then on the psych side there's TestMyBrain.org.
00:40:24
Speaker
That one's meant to pull people in with the tease of getting some stats about your brain, but really it's a way of getting people engaged with that sort of research who don't typically do it. So those are at least efforts to try to do these kinds of things digitally. Nice. I'm also reading Sheila Pontis's book on field research. I think it's a field guide; I can't remember the title off the top of my head. This is horrible. But she has some interesting ideas there that are helping me try to figure out how I can wedge this into my workflow.
00:40:55
Speaker
So on that note about practitioner limited time, limited budget as it were, I guess it's really a question to start with, Steve, but like for those sorts of practitioners, which I think is sort of most of us with limited time, limited budget, for the general practitioner, do you think it's more important for them to learn about
00:41:19
Speaker
broad cognitive science concepts, or should they watch for the latest data vis research and sort of best practices? Like where should the practitioner with the limited amount of time spend their effort in the data vis research community or field?
00:41:36
Speaker
I wouldn't think that it would be an effective use of time to go and try to read all the proceedings of this conference, given the amount of time that's available. One great thing about this field is that there are a lot of really smart people that write books and write blogs and make YouTube videos, et cetera, that explain it. Jon, you're one of them.
00:41:56
Speaker
And actually, the person who's collected the best set of these, I think, is Jen. So I'm putting her link, the bit.ly what-why-when-how one, into the Discord. So check that out. Jen has collected a great set of resources on science communication that are focused on data visualization. And I looked over her list, and I don't have much to add to it. I think it's great. So there are a lot of great blogs in there that
00:42:22
Speaker
can help. I'm seeing if I have a... here, here's a YouTube link to a talk of hers where she reviews a lot of this too. So just listen to Jen is my advice. I think we should get that made as a t-shirt and just have that. I will say that IEEE VIS does do a nice job of having a guidelines section at the end of a paper, right? I don't know if it's a formal requirement, but it is an expectation
00:42:50
Speaker
that you will not only say what your results are, but you concretely say what this means for the real world. That is not something that happens in a psychology paper. In fact, if you put that in a pure psychology paper, it's not going to look good, because it makes you seem less theoretical in some sense. Oh, that's applied research. Which I think is absolutely silly. And there's also a practitioner statement that we need to write
00:43:12
Speaker
that is a short paragraph that sums up for the practitioner what they should take from this research. Now, I still don't think that reading all the practitioner statements for the entire conference is going to be a good use of the time. I would go in a targeted way to do that. And I would start with these sorts of guides that are curated by folks that actually do serve as that bridge between the academic literature and practitioners.
00:43:36
Speaker
So I'm just gonna let folks know: we're about a quarter to the top of the hour. We have about 15 minutes left for this discussion. If you have specific questions for Jen and/or Steve, feel free to drop them in the Discord or in the Slido and I'll bring them up. But until I see more questions, I'll just keep asking my own questions, because I just have more interest in this.
00:44:02
Speaker
So Steve, with those links, this is a question for really both of you. How do we bring the two branches of the, well, these two branches of the field, I don't want to say the two branches, these two branches of the field, how do we get them closer together? Is it a matter of practitioners reading, reviewing this list and reading blogs?
00:44:25
Speaker
and researchers reaching out to practitioners to involve them in their research practice. Like what are the sort of things, and this can be aspirational. It doesn't have to be, you know, you can have things that you've seen or that techniques you like or also aspirational, but how do you see a path forward to bring these two branches together? And whoever wants to start is totally, totally fine.
00:44:50
Speaker
I can start there. Events like this one. IEEE VIS, I feel like, Jon, you've been chipping away at this for a while. I feel like in Chicago, I came to IEEE VIS for the first time to be on a panel that you had organized, and it really opened my eyes to what was going on in the research field.
00:45:09
Speaker
And it's hard to make the space in your time, you know, in your schedule, to attend something like this unless you have a direct invitation. So I feel very fortunate that I was kind of pulled into it; it kind of opened my eyes. So I feel like there are a few other events that are starting to do it more too, like Information+, which I've found to be really useful to go through their talks, because that's another place in which researchers and practitioners are both presenting within the same context.
00:45:39
Speaker
And my answer will be active collaboration. We do a lot of this in our lab and it has absolutely changed my research life. Just to give an example, we have a new set of projects we're working on now. A new first year grad student just came into the lab, O'Shoon, and she's going to work on dynamic displays. So like the Hans Rosling display where it moves around. This has been a topic at this conference for a while now. And we sat down for our first meeting and started thinking,
00:46:06
Speaker
Well, what would be important here? I bet it's important if this happens, or I bet this is a limit, and we caught ourselves predicting what actual practitioners care about, and we said no, we better actually talk to practitioners. And the plan is that we're going to not do anything until we interview
00:46:22
Speaker
that person who makes educational diagrams that move that show a physics simulation of molecules or a data journalist who needs to have that scatterplot bouncing around in JavaScript somewhere on the internet. So the first stage is to
00:46:39
Speaker
actively interview and work with those folks and then keep them involved throughout the rest of the project to keep us on track to make sure that we don't wander off into those more convenient petri dishes that can be easier to deal with but then the problems don't become as interesting.
00:46:55
Speaker
So Steve, on that one thought: what are your thoughts as to why, and I would put economics, my field, into the same group, is it just inertia that's kept researchers from doing this more? I mean, I've been making the case more recently that, like, more quantitative folks, or folks who are trained in sort of quantitative methods,
00:47:16
Speaker
Well, I'll put it this way. For me personally, I was trained in lots of quantitative methods but never took anything near a qualitative methods course. But everybody I know who does qualitative research primarily has some quantitative training; they know how to clean a data set, they know how to run at least a regression.
00:47:34
Speaker
Is it just this inertia, just the training that's been going on for decades, that hasn't pushed people to having these conversations? What does it take to move the camps together? Is it just more people like you saying, yeah, we need to build these bridges, or is it something else, something bigger?
00:47:54
Speaker
I don't know. I've personally been trying for a while ever since seeing the light myself to do more evangelism, but it's tough to do because the field doesn't expect it and there really isn't an incentive structure for it. You can publish things in that more ivory tower, petri dish model, at least on the psych side, pretty easily, and it's tough to get people to do that extra work.
00:48:16
Speaker
The granting agencies focusing more on work with real-world implications is certainly helpful. And, I don't know, maybe there needs to be more critiquing happening within the fields, and that's hard. I don't want to be mean. So maybe less of an incentive structure of
00:48:36
Speaker
asking people to do it, and more picking out when they don't. And I can say this as someone who wandered off into Petri dishes many times. My last year of grad school, I was studying little squares for the sake of studying little squares, because someone else had done it before me and someone else had done it before them. And especially when you're just getting into a field, you tend to look at what the more senior folks have done, and you tend to do that because that's the thing that you're supposed to do. And it's tougher to take that risk to go out into the field and find
00:49:05
Speaker
new problems. And I had the luxury of doing that mostly post-tenure. So, you know, there are all these constraints. I hesitate to psychoanalyze the field too much, but it is a tough problem. And so, Jen, when we first started, you mentioned a couple of particular challenges that you think would be good candidates for research and that you kind of want the answer to; log scales, you mentioned, is a big one.
00:49:34
Speaker
Do you have in mind how other practitioners can seek out researchers to get the answers to those questions? I'm sure there are lots of folks out there who have similar questions, and they might be very small things that maybe they don't think are worthy of research. But just as one off the top of my head, Robert Kosara and Drew Skau did a couple of papers on, like, how do we read pie charts? Because no one had actually ever done that study before. So, like,
00:50:02
Speaker
Like from your side, how do you think practitioners can get those questions in front of researchers to get that research base? From my side? Yeah.
00:50:14
Speaker
Twitter? I don't know. I wish I knew. Actually, coming in and engaging with people at events like this; like, you know, now I have a Discord here. And then at Information+ a while ago, I met some more researchers who were also there. They want to know what questions to ask and what to study.
00:50:34
Speaker
So I think it's just kind of finding these opportunities to meet with folks and then sort of, you know, putting a bug in their ear: if they're looking for something, I'll give you some questions I have. They may or may not fit with their area of specialty or whatnot, but at least it gets a conversation going.
00:50:50
Speaker
So I think it's just a matter of starting to follow: if a piece of research answers one of your questions, do some research on who that author is and what else they are working on. Like, I check out people's websites, their academic websites, and kind of see what other papers they've written. Every once in a while, one of these papers will hit the mainstream, like Michelle Borkin's what makes a
00:51:17
Speaker
visualization memorable. I feel like when something like that hits a broader audience: find out what else is she working on? What are her collaborators working on? Where did she present that piece? And what else are they doing? So I look for these little kind of windows that open up and then just try to dive in a little bit more.
00:51:33
Speaker
I like that. Twitter tends to work pretty well. Maybe there needs to be a hashtag declared, like IEEE VIS speed dating or something like that, where you can have practitioners and researchers meet up. I should say, by the way, my critique of the ivory tower arenas tends to be more from my cognitive psychology hat. I'd say the data vis field does care about qualitative methods. There are folks that do design studies and get into context, particularly with scientists. You know, you could say names like Miriah Meyer, Tamara Munzner, Jo Wood, Jason Dykes. And these are all people that do
00:52:02
Speaker
in-depth contextual work with experts and then take the lessons from those studies and extrapolate them. So it's not that everybody gets stuck. It's just that the field in general probably does a bit, and especially the cognitive field that is my birthplace; I'd say that one I could critique a little more strongly.
00:52:25
Speaker
In terms of these partnerships, these relationships between the two sides: do you have any tips for how the two groups can work together, given that they have very different timeframes? I mean, Jen, you already mentioned that you have, like,
00:52:40
Speaker
you have to get the product out there and it's got to go. And Steve, you know, the academic timeline is a little bit longer most of the time. So any thoughts or tips on how to blend the time frames for these two different groups? Well, from my point of view, Scientific American is 175 years old, and we,
00:53:04
Speaker
or it's a little older than that now, we tend to repeat some of the same topics, you know, every three, five, 10 years. And so we have this steady march of graphics that have been done in different styles, in different eras, and in different ways, because the way we approach visualization has changed as our audience has. So I feel like we have this wonderful archive of things. You want to see how neutrinos behave and how people illustrated that
00:53:33
Speaker
you know, 15 years ago, 10 years ago, five years ago, today. So I feel like in some cases diving into the archives of publications would allow for some natural variation in the different ways in which something has been presented over time. So maybe there's a way to do it, you know, pulling from the old in different ways for a similar audience.
00:53:57
Speaker
That, for example, would be a very cool project if I were a first-year grad student: diving into the archives and seeing how similar ideas have been communicated in different ways over time, how sometimes the way they're shown can cross domains of science and sometimes it can't because it's domain-specific, and then maybe later doing some A-B testing to find out which ones are best and why. I think that's a perfect example of the kind of inspiration that folks in the field should be looking for.
00:54:28
Speaker
So we have basically two more questions from viewers. One is on
00:54:38
Speaker
getting information about the effectiveness of a visualization by analyzing the website. So, Jen, you mentioned time on page and number of clicks, the metrics that we all kind of use. And I guess the question is really about developing other metrics: is there an appetite for doing that, and
00:54:59
Speaker
what might those metrics be? I think that latter question is for both of you, but the first question is: is there an appetite for better metrics specifically around data visualization? Jen, would that help you and your team do a better job of understanding how people are using your content?
00:55:19
Speaker
Yeah, speaking from a completely naive point of view in terms of how this could technically work: the eye-tracking kind of thing, what order are people looking at these things in, and then being able to ask comprehension questions afterwards, in terms of, did this change the take-home messages from the article itself?
00:55:42
Speaker
A lot of those are pie-in-the-sky ideas, though, because even if those tools can be made, they don't always play nicely with different organizations' content management systems. So what might work for the New York Times wouldn't necessarily work for Scientific American, et cetera.
00:55:58
Speaker
So I think it's hard because even if the tool exists and people say, oh, you should use this, it's like, well, I can't; it won't play nice with the rest of the pieces in the system. But in theory, I would love to know how people are actually reading through things and then be able to ask them questions afterwards.
00:56:16
Speaker
I think that sums it up nicely. At the moment, you can look at engagement: Do they click? How long do they stay? Those are all tractable for web interfaces. As soon as you want to eye-track, you've got to either bring people into the lab or get permission to turn on their webcam, which a lot of people are not going to be game for. And then you can A-B test: if you can get the content management system to randomize between version A and version B, you can see how that affects engagement and read time. But that's a technical hurdle to get around.
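As a minimal sketch of the A-B setup Steve describes, the Python below deterministically assigns readers to a variant and summarizes engagement per variant. The field names, logging format, and numbers are hypothetical illustrations, not anything from Scientific American's or any real publisher's systems.

```python
import hashlib

VARIANTS = ("A", "B")

def assign_variant(reader_id: str) -> str:
    """Deterministically bucket a reader into variant A or B.

    Hashing the ID (rather than picking randomly per view) means a
    returning reader always sees the same version of the graphic.
    """
    digest = hashlib.sha256(reader_id.encode()).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

# Hypothetical engagement log: one record per page view.
views = [
    {"reader": "r1", "variant": "A", "seconds_on_page": 42, "clicked": True},
    {"reader": "r2", "variant": "B", "seconds_on_page": 97, "clicked": True},
    {"reader": "r3", "variant": "A", "seconds_on_page": 11, "clicked": False},
]

def summarize(views, variant):
    """Average dwell time and click rate for one variant."""
    subset = [v for v in views if v["variant"] == variant]
    if not subset:
        return None
    dwell = sum(v["seconds_on_page"] for v in subset) / len(subset)
    ctr = sum(v["clicked"] for v in subset) / len(subset)
    return {"avg_seconds": dwell, "click_rate": ctr}

for variant in VARIANTS:
    print(variant, summarize(views, variant))
```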
00:56:45
Speaker
And then finally, if you really want to find out what they understood and what they didn't, having questions, qualitative text boxes, multiple-choice questions, et cetera, would be great. But is the reader going to take the time to do that? And if so, how biased is the sample of readers who are willing? It's tougher to do. Still, these things are possible. We had a project that we were tinkering with, with the
00:57:11
Speaker
City of Chicago's data office for a while, where they wanted to explain machine learning models: if the beach is closed today, we don't actually know that the bacteria level is too high; we have a model that suggests it is, and people are mad because they can't go to the beach. They really wanted to explain that super clearly. So we were going to test different ways of explaining these basic models to people, and that was a place where the infrastructure was available to A-B test,
00:57:39
Speaker
and you could see whether people make it through the page. But again, are people going to answer questions like, are you satisfied with this explanation? Only some do, and you run into biased samples, so it gets tougher. That's where lab-based research can be handy, if you can properly model the context of the original viewer, right? Does the Mechanical Turk worker you're bringing in to read this explanation really have the same perspective as the parent who's mad that they can't bring their kid to the beach?
00:58:07
Speaker
Maybe. So that's a place where having both sides, the quant and the qual, the lab and the context, is in the end the only combination that I think can work for many of these problems.
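To make the comprehension-question step concrete, here is a rough sketch of comparing correct-answer rates between the two variants with a two-proportion z-test. The counts are invented for illustration, and the self-selection caveat Steve raises applies: only readers who chose to answer are represented.

```python
from math import sqrt

def two_proportion_z(correct_a, n_a, correct_b, n_b):
    """Two-proportion z-test: did variant B's readers answer the
    comprehension question correctly more often than variant A's?

    Caveat, per the discussion above: only readers who opted to
    answer are counted, so this compares a self-selected (biased)
    sample, not all readers.
    """
    p_a, p_b = correct_a / n_a, correct_b / n_b
    pooled = (correct_a + correct_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Invented numbers: 60 of 90 respondents correct under A, 75 of 95 under B.
z = two_proportion_z(60, 90, 75, 95)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests a difference at the 5% level
```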
00:58:20
Speaker
Yeah, we're almost out of time, so I want to close by having you each tell people where they can get ahold of you, in this spirit of bringing the two groups together: practitioners who have ideas for research or need things solved, researchers who have ideas for Jen. We'll go with Steve first. Steve, what's the best way for folks to get in touch with you so that they can
00:58:47
Speaker
pitch their ideas to you, and you can go off and solve their problems?
00:58:52
Speaker
I'd say my email is a good one; it's just my last name at Northwestern or Gmail. But even better is Twitter: it's just @SteveFranconeri. That's more fun because it gets the rest of the data visualization community involved. Someone will respond, we'll create a conversation, and it gets more people involved, so that's the one I'd really suggest. One other thing you might be interested in, and I'm going to put this in the Discord as well: there's a journal called Perspectives,
00:59:21
Speaker
excuse me, Psychological Science in the Public Interest, and Jessica Hullman, Priti Shah, Jeff Zacks, Lace Padilla, and I have a review paper of psych work
00:59:34
Speaker
on data visualization. We synthesize work from data visualization, graph comprehension, cognitive science, et cetera. Hopefully that might be of interest to folks, to get them cued up first on what's known from the psych side, so that they have a good baseline from which to ask questions about things that are as yet unknown.
00:59:57
Speaker
Terrific. I have actually seen that paper, so you should all look forward to it; it is quite a good intro to this whole field, and it gives you a really deep look. Jen, what's the best place to get ahold of you, for those researchers who have ideas on how to do these tests, or other things they want to pitch you for Scientific American?
01:00:18
Speaker
Sure. A great example of the divide between practitioner and researcher right now is that I don't think I'm actively working with Discord, right? I don't know how y'all do it with these conferences, with five different ways of doing this. So I don't think that actually worked, but you can find me on my website, which is just jenchristiansen.com, my name without a space. On Twitter, it's ChristiansenJen, no space.
01:00:43
Speaker
Those are great ways to get ahold of me, and I am accepting pitches for Graphic Science pages, so if you have ideas, maybe we can get some visualization research onto that page. I probably should have done that already, and maybe I have a bit, but that would be a great way to help give a megaphone to some of the visualization research going on out there.
01:01:03
Speaker
Terrific. Thanks to you both, Jen Christiansen and Steve Franconeri, for doing this, having this discussion, and breaking down some of these walls. And thanks, everybody, for attending and tuning in. We're looking forward to the rest of this workshop. I'm going to hand it back over to my co-organizers, and we will be back with our next full session. Thanks again.
01:01:25
Speaker
And thanks to everyone for tuning in to this week's episode of the podcast. I hope you'll check out some of the resources and references that were included in that conversation. I've listed them all out in the episode notes to this show. If you would like to support the show, please share it with your friends, family, neighbors, anyone who you think would be interested in a data visualization podcast.
01:01:45
Speaker
You can share all the links on your social networks. If you would like to support the show financially, head over to my Patreon page. I've got new goodies ready to send out to you. You can also provide a one-time donation using my PayPal account. All of this is linked on the show notes page. So once again, thanks for tuning in to this week's episode of the podcast. Until next time, this has been the PolicyViz Podcast. Thanks so much for listening.
01:02:12
Speaker
A number of people help bring you the PolicyViz Podcast. Music is provided by the NRIs. Audio editing is provided by Ken Skaggs. Design and promotion is created with assistance from Sharon Satsuki-Ramirez. And each episode is transcribed by Jenny Transcription Services. If you'd like to help support the podcast, please share it and review it on iTunes, Stitcher, Spotify, YouTube, or wherever you get your podcasts.
01:02:33
Speaker
The PolicyViz Podcast is ad-free and supported by listeners. If you'd like to help support the show financially, please visit our PayPal page or our Patreon page at patreon.com slash policyviz.