Climate Change and Land Management: Social Network Analysis


Ashley Fortune Isham: Good afternoon from
the U.S. Fish and Wildlife Service’s National Conservation Training Center in Shepherdstown,
West Virginia. My name is Ashley Fortune Isham and I would
like to welcome you to our webinar series, held in partnership with the U.S. Geological
Survey’s National Climate Change and Wildlife Science Center in Reston, Virginia. The NCCWSC Climate Change Science and Management
webinar series highlights their sponsored science projects related to climate change
impacts and adaptation and aims to increase awareness and inform participants, like you,
about potential and predicted climate change impacts on fish and wildlife. I’d like to introduce our speaker today. Dr.
Mark Schwartz is a conservation biologist with research interests in climate change impacts
on rare and endangered plant species, climate change adaptation through resource management,
and decision making for resource managers under uncertainty.
He's a professor in the Department of Environmental Science and Policy at UC Davis and the director of
the John Muir Institute of the Environment. Dr. Schwartz is also a UC Davis co-PI for the Southwest
Climate Science Center. Mark, welcome. Mark Schwartz: Thank you, Ashley. Thanks for
inviting me to do this webinar with you today. As I get started, I want to thank a large
number of people who've helped out with this project. We had a lot of help from collaborators
at the Fish and Wildlife Service, National Park Service, Bureau of Land Management, Forest
Service and others. We had funding from the USGS and the Southwest
Climate Science Center to do this work. This is really a combined effort between
myself and Mark Lubell, who is also in the Environmental Science and Policy Department
and works on social network analysis, and Casey Peters and Carlos Barahona, the
graduate students, technicians and programmers who do all the heavy lifting in
sorting out all the numbers that we have. To start off, I'd like to address the
questions that we’re trying to answer. This has come out of a series of conversations
with the Climate Science Center. We’re trying to understand how natural resource
managers and policymakers decide what they’re going to do in order to try and manage for
resilience in natural ecosystems, given climate change. To do this we needed to know what these people
are worried about, with whom they’re connected in order to get information to make decisions
differently than they did in the past, and how they want to access information. What
kinds of information go into natural resource management decisions? How will they use that
information in decisions? We think of this as different than a needs
assessment. A needs assessment is asking a bunch of people, what do they want? We want to know more than just what information
they want. We want to try and understand who they get that information from, how they convey
what information needs they have to a research community that might be able to develop information
for them. Once they get it, how do they use it, exactly?
We found that generally, if you said, “If we made a map of this, or a map of that, would
that be helpful?” The answer to that question is always, “Yes.” But, the use of that information
is subsequently spotty. We're trying to dig a little bit deeper into this question
and say, “Well, exactly how are you going to use that information?” To start off we want to think about how people
are connected to one another. Why network analysis? Network analysis in the environmental
sciences is coming online quite rapidly. Fundamentally, it’s just asking a very simple set of questions. One is, who’s talking to whom? Over in the
left, you can see a diagram with arrows and a bunch of names. Some people are more
connected, some people are less connected. Some people are central to the network and
some people aren’t in the network at all. Over on the right is, which way is information
flowing? Are people giving information or receiving information from different people,
differentially? One thing I might note about these pictures
here, is that there’s also a certain sensitivity in this information in that, for example,
we wouldn’t want to be calling out Rebecca as being unattached to any information networks
here. The presentation that I’ll be giving is going
to be looking at these things at a fairly high level. We don’t want to be pointing fingers
at anybody about whether they are or aren’t connected. That’s not our intent. Our intent
is more to think about it structurally: what kinds of resource managers and
what kind of people, in which agencies, are connected to whom and how. We can start with the conceptual model that
looks something like this, where down in the lower right you have an agency field manager.
We know that these agency field managers are talking to one another. That’s that brown
arrow. We know that they’re getting information from
their regional and national headquarters, the other brown arrow. We presume that those
regional and national agencies are getting information from a variety of research centers. We also believe that there are agency and
field managers who are getting information directly from that research community. That
research community could be federal scientists at places like the USGS, university scientists,
NGOs or at places like the Weather Services. We’re trying to look at how these different
people are connected in different ways. As I mentioned, the social network analysis
is our primary tool; there's a broad expansion of this in the literature now. This graph
has time across the x-axis and the number of publications on the y-axis; these are the number of publications
that use the term "social network" along with natural resources management, or conservation,
or fisheries. You see that it’s a literature that has grown
quite rapidly over the last couple of years. This is because there’s a couple of really
great examples now of how understanding a complex resource management network has allowed
resource managers to operate in different ways and solve some problems. One of the classic cases that I think about
has been relatively unregulated fishing in international waters. Particularly, around
Antarctica, where there are NGOs, and governments and fishermen, who all have an interest in
seeing better compliance with the rules that are out there. Yet there were fishermen who weren't
complying. Having the agencies and the NGOs at loggerheads
with the fishermen wasn't working very well. Once they understood how people were connected
and talking to one another, and brought the fishermen who were interested in seeing compliance
into the picture, it started to work a lot better. It was through a social network analysis
that they were able to do that. What did we do? We were looking at five major
questions. What are resource managers worried about? We want to understand the attitudes
towards climate change, and how they connect those to the resources that they’re managing.
We want to know with whom they’re connected in order to get information. We want to understand
how they want to access that information. What information is valuable to them? Then, how they’ll use that information in
decisions. That’s in a little bit of these arrows here, because we haven’t fully answered
that question yet. We’re really addressing that question through a different process.
I’ll be talking a bit about that right at the very end. We also want to develop a baseline for understanding
the role of the Climate Science Centers and the Landscape Conservation Cooperatives in
climate information. I’ll talk about that a bit as well. We started out with a survey. We had 25 questions
in the survey, focused on six areas: professional background, opinions on climate change issues
in public lands, personal views on climate change, involvement in adaptation planning,
scientific understanding of climate change, and then the network connections, who they’re
giving information to, who they’re receiving information from. The study area for our project was the Southwestern
Climate Science Center and the North Central Climate Science Center footprints. Although,
once you start asking people who they talk to, the network diffuses out from those footprints
proper. We focused our survey on workers in four agencies:
the Forest Service, Fish and Wildlife Service, National Park Service and the Bureau of Land
Management. This started with an initial survey, where
we asked regional leads from each of those four agencies to distribute the survey to
their staff, who have resource management responsibilities that would have a climate
change impact. Then, we asked those people who filled out
the survey to identify other people that they've talked to, whom we might follow up with in a
snowball survey. We gained additional participants through that snowball.
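As a toy illustration (not the project's code), the snowball step can be thought of as repeated rounds in which the people who just took the survey name contacts, and any newly named people receive the survey next. A minimal sketch with hypothetical names:

```python
# Minimal snowball-sampling sketch with made-up people; not the project's code.
def snowball(seed_respondents, get_named_contacts, rounds=2):
    """Return everyone reached, starting from the seed respondents."""
    surveyed = set(seed_respondents)
    frontier = set(seed_respondents)
    for _ in range(rounds):
        named = set()
        for person in frontier:
            named.update(get_named_contacts(person))  # contacts this person says they talk to
        frontier = named - surveyed                   # only newly named people get the next survey
        surveyed.update(frontier)
    return surveyed

# Hypothetical contact lists: A and B name C; C names D.
contacts = {"A": ["C"], "B": ["C"], "C": ["D"], "D": []}
print(sorted(snowball(["A", "B"], lambda p: contacts[p])))  # ['A', 'B', 'C', 'D']
```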
We ended up getting 590 management and research units, 313 of which were federal. We had named connections from more than 20 agencies. That
map of the things that we had sampled looked something like this, where you have lots of
little stippling up in the Dakotas. Up in the upper right there on that map, these are
mostly refuges, Fish and Wildlife Service Refuges. Then, of course, there’s lots and
lots of public lands in the Southwest and the north central regions. The National Park Service is in
blue and the Forest Service is in green. The color that, I guess, looks more peachy to me than orange
is the BLM. The tiny dots are mostly Fish and Wildlife Refuges. There are places in the
West where the Bureau of Land Management owns land in a big checkerboard. It just turns
black on this map, so that’s mostly what this black is.
If you look down in the lower left, you see this is the number of people we had in each
of those agencies that filled out the survey: 763 total survey participants. We had really great participation from the
Bureau of Land Management. The Fish and Wildlife Service is a smaller agency. The National
Park Service, we felt we had pretty good participation from people in that agency. The Forest Service is a big agency. We didn’t
do quite as well in sampling that group, but we still had 115 people filling out the survey.
We had 34 people from universities. I’ll talk a bit about that as we go through. We can also identify things as nodes. A node
is a combination of a physical location, an address where somebody works, and a job title.
Often, a node is a person, but if a particular regional office of the BLM has two range plant
managers, that would still be just one node. You can have more than one participant as
a node, and nodes can include contacts that are not survey participants. We have a total
of 1,452 nodes, so this reflects how many people were pointed to who didn't themselves take the survey. It was quite a large number.
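To make that concrete, here is a minimal sketch, with hypothetical offices and titles rather than the project's actual code or data, of collapsing individual records into nodes keyed by location and job title:

```python
# Toy sketch of building nodes keyed by (location, job title); hypothetical data.
from collections import defaultdict

records = [
    {"location": "BLM Ely District Office", "title": "Range Specialist",   "took_survey": True},
    {"location": "BLM Ely District Office", "title": "Range Specialist",   "took_survey": True},
    {"location": "Yosemite National Park",  "title": "Wildlife Biologist", "took_survey": False},
]

nodes = defaultdict(lambda: {"participants": 0, "contacts_only": 0})
for r in records:
    key = (r["location"], r["title"])   # two people with the same office and title share a node
    if r["took_survey"]:
        nodes[key]["participants"] += 1
    else:
        nodes[key]["contacts_only"] += 1

print(len(nodes))  # 2 nodes, even though there are 3 individual records
```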
I forgot to fill in the number of contacts on this map here, but the map just describes where those people are geographically. You
see that there’s heavy representation in the states that are in the Southwest Climate Science
Center, which are California, Nevada, Utah, Arizona, parts of Colorado and New Mexico, and in the North Central Climate Science Center,
which includes Idaho, Montana, Colorado, Wyoming, the Dakotas and parts of Nebraska. The first thing that we wanted to look at
is what are resource managers worried about, and then to understand the attitudes toward
climate change among resource managers. The first thing we asked is, how well informed
do you think you are, with respect to climate change? Here, of course, we get a completely
non-surprising answer that university people always feel like they're completely well-informed
about everything. That’s this purple bar. Jokes on webinars
don’t go very well, since you can’t hear any of the audience participation in that humor. What we find is the BLM feels a little bit
less informed than the Fish and Wildlife Service. Then, the National Park Service, et cetera.
From the left to the right on that graph. We also asked about views on climate change.
Here we have a series of graphs, and the labels on the bottom are very small. The labels on
the vertical axes are also very small, but the bars just sum to 100 percent of the responses. What I did for the upper left here is to
identify these groups by their bars. Again, I’ll just define what the question was and
what the answer was. The upper left graph shows what was the perception
of the scientific understanding of climate change, and whether people think that most
scientists agree that climate change is happening. Again, we see that the Bureau of Land Management
is a little less certain about whether climate change is actually happening. By and large,
there’s a fairly strong belief that climate change is happening and it’s shown in that
upper left graph. The upper right graph is showing whether it’s
happening now. Again, most respondents feel that it is happening now, with a little bit
lower response rate from the Bureau of Land Management. The bottom graph is, what do you view as
causing climate change? Where the red bars represent the answer that climate change is
being caused by human activities. The green being that it’s caused by natural changes
in the environment, and not so much by human causes. Then, the blue is people who think
that there’s no climate change and so it’s a nonsensical question to answer. Again, we see a little bit lower belief in
a human driver of climate change in the Bureau of Land Management than in the other agencies. I should note that there are no responses from
Forest Service in here. The Forest Service didn’t want us to be asking these questions
of their staff, so we excised those questions from the Forest Service respondents. Then, we asked a series of questions that
are basically going to rank the people’s views on climate change. This is an example question,
down the lower right part of the slide. “Given your current knowledge about the following
potential climate change impacts, to what extent do you expect climate change will make
it easier or harder to meet your management goals?”
This is going to be something on physical responses. They can click anything from significantly
easier to significantly harder, or that they don’t know. We’re going to get a series of
answers that are distributed across that spectrum. We asked these questions over physical impacts
of climate change on management (things like water, temperature, pollution and fire) and biological
impacts of climate change on management (things like shifting distributions of species or invasion).
Then, vulnerability of cultural resources and also the vulnerability of those physical
and biological resources. I’m going to show you a series of slides and
they have a similar format. I’m just going to run through that format, right here. The upper left panel will always be the Bureau
of Land Management. Upper right, Fish and Wildlife Service. Lower left, National Park
Service and lower right, Forest Service. You can see those in little letters in the gray
bars there. Red is always going to be that it’s significantly
harder. With the orange peachy color being somewhat harder, and the blue color as being
significantly easier. White being a neutral color. The different bars represent the different
specific subquestions in each of those areas. Here’s that graph again, without those symbols
over them. We see here, that this is looking at physical impacts of climate change. We ordered the questions from top to bottom,
from being the most red to the least red. What you find is that everybody is generally
worried about things like change in drought risk and fire risk, change in rainfall amounts,
and that across the agencies we see significantly less concern about change in sea level rise. That’s probably not so much a measure of the
low importance of sea level rise, but the small fraction of people who have management
responsibilities along the coastline and, therefore, have to deal with it. Lower risk
associated with air pollution and water pollution, as well. In thinking about biological impacts of climate
change, I just colored all of these red. Because all four agencies, across all of these different
attributes of biological impact seemed pretty darn concerned about it. There’s not a lot
of variation across that spectrum. Everything from changes in risk of invasive and noxious
species, down to changes in resource productivity, people are worried and think that it’s going
to be significantly harder to manage their resources as a consequence of climate change. We then asked them about the biological resource
vulnerability at a high scale from marine biodiversity, aquatic biodiversity, plant
population, terrestrial biodiversity and animal population. Again, fairly high and fairly
consistent feeling that there’s a large amount of vulnerability in the resources that people
are out there managing. People are very concerned about this. This
is an issue that is on their minds. Considerably less so, with respect to these
cultural resources. Here, we see a lot more blue and white in the bars. These questions
are things like everything from recreation, visitor services and modern infrastructure
down in the blues, to being things like the applicability of traditional ecological knowledge
and management techniques. That makes a perfect set. I would classify
under the applicability of traditional ecological knowledge this idea that we want to use
historical range of variation as a target toward which we try to manage our ecosystems.
Then, if we have a changing climate, maybe that doesn't make so much sense anymore, so people
should be more concerned about that than other things. Then, looking at resource values,
many people are also concerned about resource values uniformly across the board. Water resources,
forage resources for grazing, forest and timber resources: these are all things that are on
the minds of the various agencies. There’s a little bit of variation across these
questions and across agencies, but it seems more that the large takeaway message from
this is that people are very concerned about almost all of their resources. We can then slice those attitudes
in a variety of different ways, and one of the ways that we did this is just looking
at these things geographically. If you look from the West Coast, there's the cherry-red
California ecosystem, the blue is the Sierras and the Northwestern
forests and the montane regions; the purply colors and the reddish colors are the deserts
and the Southern montane regions over to the deserts. We’ve summarized the attitudes of people across
those four regions that capture the large fraction of our study region. We can summarize their attitudes as well,
with the desert being up in the upper left-hand corner, the Great Plains in the upper right-hand
corner, the Mediterranean California down in the lower left and the Northwestern forest
and mountains down in the lower right. This is again across different resources, and we
get perfectly sensible answers coming out of this. If you look at the rightmost bar in each of
those graphs, people in the desert, Great Plains, forest and mountains are not so worried
about changes in sea-level rise. People in Mediterranean California, considerably more
so. That’s the only part of the coast that’s in our sample area, so it makes sense that
they are the ones that are worried about that. Again, if you look at the rank order of these
measures of physical vulnerability, that changes to drought risk, fire risk and annual rainfall
amounts, that water in the West is a very, very large concern for a lot of people in
the resource management area. That’s true across all of these different groups of people. Again, here’s looking at biological impacts.
Not a lot of variation in this by agencies. Not a lot of variation in this with respect
to where you are in the region. People are generally concerned about biological impacts
of climate change and the ability to meet management objectives as a consequence. There’s a couple of other ones that I’m not
going to go into in too terribly much detail. If you want to come back to this webinar later,
not live, and pause and squint at these bars, you're welcome to. I encourage you all to
do that and give us some questions about them as they occur to you. We can reparcel these attitudes in a variety
of different ways to look at how people are concerned about resource management issues,
and we have done so both by agency and by region for the purpose of this webinar. OK, so we then asked, “Well, what kind of
information is useful to people?” One of the ways we can ask that is the format of useful
data. Green here represents very useful, yellow somewhat useful, red not very useful. Down
at the bottom, data in raw format, not very useful to respondents. More than half the
people in the survey said that’s not useful for them.
Only slightly better than that, orally summarized data. Then, on number eight, through fourth
from the left, video and slide show, so I guess that means this webinar is not very useful,
but there you go. No, again, another joke on webinars; don't do that. Starting on the left, the things like visually
summarized data, maps and geospatial illustrations, written summarized data, these are all useful
sorts of things. Peerreviewed research literature, of course, because it’s making them meet the
documentation, having a peerreviewed source to point to is a very helpful thing indeed. We can examine the way that information is
useful for people. We can also look at this with respect to what is actually being used,
types of models or data being used. Here I think this is a very interesting response
in that what you get is, across the board, top to bottom, from vulnerability assessments
down to global climate models, that you find people using them and where the green here
is that they currently are using them. The light green is that people are planning to
use them, and yellow is that they may use this information. You’re seeing across all agencies people who
are currently using a wide variety of different kinds of tools to help them make management
decisions. That’s I think a very encouraging kind of response that we get out of this survey,
even down to the point of using global climate models. I’ve always thought that really, for a resource
manager to make use of that information in the decision, that it’s helpful to have that
global climate model interpreted with respect to some resource, but here you see people
already using these pieces of information in their decision processes. OK, so the next thing we want to do is to
talk about with whom are people connected in order to get information? This is where
we get to that network analysis that I promised and that there is the network that we have.
This is a very, very difficult thing to look at. What you see is a lot of things in the
center that are well-connected to a variety of different agencies or different groups.
that are connected to one or two other people, or are not connected to this network at all,
as sampled by the questionnaire. There are a lot of different entities that are captured
here. The color scales all bleed together. It’s not a very helpful picture to look at
in some ways. We can summarize this by agency or entity
group. We see that this comes out a little bit differently in this regard and that not
surprisingly the Forest Service, BLM and Park Service are very central to the connectivity
among groups here, but we also targeted our survey at them. The Fish and Wildlife Service is a little
bit further out. Well, that’s also sensible in that it’s a smaller agency, and we had
fewer connections from those people. What we want to do then is to think about some
metrics that we would look at, in terms of the social network. There's something called in-degree: how many
people are pointing in toward you as a source or recipient of information. Out-degree
is how many people you are pointing at as a source or recipient of information.
Centrality reflects how many different connections you have.
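As a rough illustration only (a minimal sketch, not the project's actual code), these metrics can be computed with the networkx library on a tiny, made-up information network, where an arrow from A to B means that A gives information to B:

```python
# Toy directed information-sharing network; the node names are hypothetical.
import networkx as nx

edges = [
    ("university_lab", "usgs_center"),
    ("usgs_center", "regional_office"),
    ("usgs_center", "field_manager_2"),
    ("regional_office", "field_manager_1"),
    ("field_manager_1", "field_manager_2"),
]
G = nx.DiGraph(edges)

in_degree = dict(G.in_degree())              # how many nodes point in toward you
out_degree = dict(G.out_degree())            # how many nodes you point at
betweenness = nx.betweenness_centrality(G)   # how often you sit on paths between others

for node in G.nodes():
    print(node, in_degree[node], out_degree[node], round(betweenness[node], 2))
```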
This is a table from Prell et al. in 2009 that looks at these concepts as they relate to natural resource management. It suggests
that strong ties are good for communicating about and working with complex information
and to hold and maintain trust between actors. These actors are more likely to influence
one another’s thoughts and views when we have these strong ties, but weak ties can be useful
as well. They tend to bridge across diverse actors and groups and bring in new ideas and
things like this, so we can look at this in a variety of different ways. I’m going to go into this relatively lightly
here today, as I said, just keeping it on a very high level. This is a very simple summary
of directionality of people's connections. The columns here, the betweenness,
are basically an average metric that reflects the connectedness of people
who responded to the survey from these different agencies. I’ve colored the four focal agencies
in green, and also colored the university participants at the universities that
host the climate science centers, both the Southwest and the North Central, in
blue; the USGS is the other line in blue. We lumped everybody else (the other university people,
NOAA, NGOs, all those other 20 or so organizations) into this other category. What we see from this is that, in terms of
betweenness, the Climate Science Center folks and the USGS folks are far, far ahead of this
other suite of people, both in terms of receiving information and providing information. That’s
a good thing. That means that we think that, even though this survey took place really
early on in the Climate Science Centers, they're well-connected. I guess I would probably interpret that as
being more that the USGS did a good job of picking universities to host the Climate Science
Centers. Rather than that the Climate Science Centers have positioned themselves well, because
it’s really too early for them to have produced very much that would be of value as a Climate
Science Center during that survey. The other thing that we see is that the National
Park Service, which is I think taking on this job of climate change adaptation very seriously
and really trying to think about how to change their management as a consequence of climate
change, are both big recipients of information and providers of information and have a lot
of connectivity. BLM came out looking very good as well. Forest
Service a little bit less so, and it’s not clear to us now in looking at these numbers
the degree to which that is an attribute of the Forest Service or whether that’s an attribute
of the fact that we did not saturate the Forest Service as well as we did the other
organizations. We really had a different process for getting
the surveys out to people in the Forest Service, and that became a bit of a challenge. We’ll
talk about that right at the very end here. Another thing that we found is that connectedness
is not really related to a risk or vulnerability perception. This is a model analysis of those
things that link connectedness to people’s perception, a couple of the questions in the
survey. They're just summarized down here in this ANOVA table, in the group called A, the
four bottom lines. If you look inside the box, those are all
P values well above 0.05 (in fact, above 0.3), and that just really suggests that
there's no relationship between a person's connectedness and their perception of overall
risk.
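As a hedged sketch of that kind of test, with invented data and variable names rather than the project's actual model, one could regress a perceived-risk score on a connectedness score and read off the p-value:

```python
# Toy test of whether connectedness predicts perceived climate risk; invented data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "connectedness": [2, 5, 1, 8, 3, 7, 4, 6],                  # e.g., a degree or betweenness score
    "risk_score":    [3.1, 2.8, 3.4, 3.0, 2.9, 3.2, 3.3, 2.7],  # mean "harder to manage" score
})

model = smf.ols("risk_score ~ connectedness", data=df).fit()
# A p-value well above 0.05 on the connectedness term would correspond to the
# "no relationship" finding described in the talk.
print(model.pvalues["connectedness"])
```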
However, there are some things that do predict connectedness, and those are participation in planning exercises, information demand and
being informed. There was another set of questions on what
people’s information demand was, how much they had participated in activities. The more
people participated in activities, the more connected they are to a community that deals
with climate change. This is one of the questions that we asked
early on about attitudes. How important is climate change adaptation planning? You can
see the blue is extremely important to somewhat important, the red being not important. You can see variation across these agencies
where the Park Service feels it’s more important to be engaged in this planning process than,
say, the BLM does. OK, so the other thing that we could do is
look at this idea of mapping risk perception. We can map connectivity by geography, and
we can map attitudes by geography. We’ve done a little bit of both those things. This is
kind of a complicated graph, but I'll draw your attention just to the lower right-hand
corner. This is a variable called “climate velocity”
that was defined by Scott Loarie about five years ago. It really reflects how far or how
fast a species might have to change or move its distribution or evolve in order to maintain
its position in the landscape. In other words, where climate may be changing
but there's a montane environment, such that staying in the same climatic envelope only
means moving upslope a little bit, that velocity is relatively low. Hence, the Sierras turn
blue in this graph. If you have to go a long way across relatively
homogeneous environments to get to a new environment and a new climate, then that would be a
high climate velocity. That's why the Central Valley of California looks red there.
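As a back-of-the-envelope sketch of that idea, with made-up numbers rather than the published data, climate velocity is roughly the rate of warming over time divided by how steeply temperature changes across the landscape:

```python
# Toy climate-velocity arithmetic; all numbers are hypothetical.
warming_rate = 0.03            # degrees C per year

flat_terrain_gradient = 0.005  # degrees C per km across a broad valley
steep_terrain_gradient = 0.5   # degrees C per km up a mountainside

# Velocity (km per year) = warming rate / spatial temperature gradient:
# how fast the same climate envelope moves across the landscape.
print(warming_rate / flat_terrain_gradient)    # 6.0 km/yr: high velocity, valley shows red
print(warming_rate / steep_terrain_gradient)   # 0.06 km/yr: low velocity, mountains show blue
```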
We asked whether climate velocity (because we have a map of climate velocity for all of North America) is related to climate risk
as measured by the average score of how much harder a manager thinks management will be. This is a graph that just simply shows there’s
no relationship there. People are worried about their resources, no matter what those
sorts of attributes are of potential for climate change. I guess that suggests that the vulnerability
of resources may be tuned to those environments and that they’re still of concern to people. OK, so what are the lessons that we learned
from this exercise? Well, we had the regional offices pushing our survey out and there are
a lot of things that were really good about that. The response rates are pretty good among
places where regional officers asked for people to participate. I don’t think we would have
gotten quite as high a response if we didn't have those regional officers suggesting that
it was a good thing to do. There are also some constraints that slowed
us down considerably. It took a long time to get some of these surveys out to people
as a consequence of that filter. The Forest Service, for example, put some filters on
questions that were not put on by other agencies, so we don’t have exactly the same questions
asked for everybody. It constrained the number of people who actually,
in the end, saw the survey. There were some drawbacks to having this survey sent out through
regional offices as well. What did we learn from the data themselves?
We learned that federal resource managers are generally informed and genuinely concerned.
The managers are able to distinguish among resources at risk, but they don’t tend to
scale them relative to the geographical variation of that risk. It’s easier to assess from the ground up,
who field people talk to, rather than from the top down, for whom researchers produce
information and with whom they engage. That's the process that we took. The network is massive. There are a lot of people
out there making resource management decisions and that makes assessing those attitudes very
difficult because it turns into a very big data stream. One of the things that we learned in our survey
was that it’s important to define a time frame when asking people about their perceptions
of climate change. Some people felt that they have a very different attitude about thinking
about climate change impacts over 20 years versus 100 years. We didn’t specify that in
our questionnaire, and we should have. Finally, we learned that connecting activities,
training sessions, planning, et cetera, heightens both the connections of people to one another
and awareness of issues that come up. We need to be working on things that build
those connecting activities. We have a long way to go to build an effective bidirectional
climate information stream with the researchers. Resource managers talk to each other quite
easily. Researchers talk to each other quite readily. Getting those places where researchers
and resource managers are in the room, defining problems and working together to solve and
get answers, is the much more difficult thing. I wanted to go on and spend just a few minutes
then on a project that we’re really doing as a follow on to this climate network project.
This one is funded by the Southwest Climate Science Center. It’s a collaboration with
Matt Williamson, Christine Albano and myself, all working just in the Southwest now. Our goal here is to assess the landscape of
climate-relevant resource management decisions. What are people making decisions on, and how
are they making those decisions? Our objective is to develop decision support
tools, knowledge networks and climate science targeted specifically to meet the needs of
managers and decision-makers. How are we doing this? We've taken two overarching approaches. One is a content analysis. We're using the
federal records of decisions, things like the Federal Register and looking at words
that are used in the Federal Register, and from that, we’re trying to classify decision
types. How many different decision types are there?
Then, what are the frequencies of the resources that have decisions, and what kinds of decisions
do they have?
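Purely as an illustrative sketch, and not the project's actual method, a keyword-based pass over decision documents might look something like this (the decision types and keywords here are hypothetical):

```python
# Toy keyword-based classification of decision documents; categories are hypothetical.
DECISION_TYPES = {
    "fire management":   ["prescribed burn", "fuels treatment", "wildfire"],
    "grazing":           ["allotment", "forage", "livestock"],
    "land use planning": ["resource management plan", "forest plan"],
}

def classify(text):
    """Return every decision type whose keywords appear in the document text."""
    text = text.lower()
    return [dtype for dtype, keywords in DECISION_TYPES.items()
            if any(k in text for k in keywords)]

doc = "Notice of availability: draft resource management plan and grazing allotment review."
print(classify(doc))  # ['grazing', 'land use planning']
```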
Then, a second set of activities goes about getting information from individuals themselves. We've convened a series of consultative groups
in the process of developing a web survey to ask: what information streams
are being used? How are they being used? What do managers say that they will use? And how
will they use that information in those decisions? This slide here is kind of a complicated one,
but it captures the problem, if you bear with me for just a minute. Across the X axis, or
the horizontal axis, we have different resource types like managing people, managing aquatic
environments, managing terrestrial environments and managing the estate, things like mining
permits or purchasing land. From the bottom of the graph to the top, we
have some kind of scope of a decision, so we have decisions that are very broad across
all of those different attributes (people, aquatic, terrestrial, estate) that are all-encompassing
for resource managers. Things like land management plans and forest plans, and things like this. As you go down on this scale, you find things
that are more linked to actions and are more specific to the kinds of resources that they
impact. Down on the lower left here, you have a yellow box called “Trail maintenance,” where
that’s really some combination of working with the estate and people. We could be having “Manage fire” as something
to do with the terrestrial management of the ecosystem so that we have decisions at different
breadths of scales of these different resources and at different levels, in terms of the time
and the area that they impact. Characterizing these decisions is one of our
challenges that we’re trying to work on, both with these consultative groups and with the
content analysis of federal decisions. We've had four consultative groups: Sacramento,
Reno, Salt Lake City and Tucson. In these, we've been looking at what these people are
concerned about with respect to resource management. Some of the takeaway messages that we’ve heard
from these groups: the administration's current emphasis on mitigation, as opposed to adaptation,
is often driving resource information needs. This is, sometimes, to the frustration of
individuals who would like to be working on adaptation and they’re tasked to focus on
mitigation of climate change. Uncertainty in climate models and uncertainty
in outcomes is a key roadblock to using climate information in NEPA documentation; there has
to be…these documents are setting the bounds, the fence line, as it were, for an action
that could be taken. When we have uncertainty, that makes it harder
to accept those boundaries. Uncertainty has always been a problem with respect to NEPA
documentation, but it's becoming more of an issue with climate change. Major decisions up on the top of the last
graph I showed, like Forest Plans, deal with multiple resources simultaneously. As a consequence,
they may have limited bandwidth for taking on single resource climate models, things
that deal with individual species. Now, of course, there are exceptions to that.
Things like the sage grouse, which are hugely controversial now in the Western United States,
are certainly getting that sort of species-level attention even under these major decision
documents. It's just generally hard to look at single-resource models for integrative decisions. Matching the scale of the assessment and the
decision is difficult and is often presumed to be a barrier. Although, when we stop and
quiz people very carefully on this it’s hard to get them to say exactly how they would
make a different decision if they had a downscaled model to two centimeters, two kilometers,
or 20 kilometers. Everybody would like more finely tuned, downscaled
models, but it sometimes doesn’t drive decisions that much differently. I think that our climate
science friends would say, oftentimes, there’s a point at which that further downscaling
isn’t actually adding additional information. It’s just sort of smoothing over maps in the
way that we understand those landscapes to be present, and assuming that that same differential
will exist in the future. Then the last point, and I think this is a
very important one, maybe the most important one, is that there's a strong preference for
knowledge coproduction as a way of generating information. When the resource managers can sit in the
same room with the researchers, they can define a problem and then generate information
and support for a decision that really works for that decision. Now, that’s a very expensive way for Climate
Science Centers and LCCs to provide information to resource managers. That’s why we want to
know something about the frequency of different kinds of decisions, because we ought to be
cautious about when we’re going to invest a lot into this knowledge coproduction and
when we’re not going to invest in that knowledge coproduction, in terms of helping resource
managers. We want to be doing the more important decisions. That is the end of my presentation, and so
I’m going to open it up and take questions. I think Ashley will probably take it off mute
or else she’ll field the questions. Looking over in the chat box, I see Joy Marburger
said, “NASA is another federal agency doing a lot of outreach on climate change,” and
she’s absolutely right about that. It’s a group that we did not survey well in our network
analysis. Ashley: All right. The next question is from
Beth and it said, “Can you comment on why NOAA figures are such an outlier in a network
diagram? Is it a function of methods geared towards DOI and Forest Service, or are there
other possibilities?” There’s another part of that question, but I’ll wait. Mark: Yeah, so I’m going to scroll back up
for that there. The question looks at the outer galaxy of NOAA way over to the left-hand
side. Drilling down into this a little bit more carefully, we could be looking at this
and figuring out why this is so. One hypothesis is that the network is exactly
how it looks. NOAA is providing information to people at the research institutions, like
at universities, who translate that information to resource managers. Alternatively, an explanation is that we didn’t
send a survey to anybody at NOAA. What we got was people who pointed at NOAA and said,
“That’s where we get some information.” When we’re looking at who people are pointing
at, those are by default going to have lower sample sizes. Things like the privates, the
locals, the state governments, they’re also outliers. They also had relatively small sample
sizes. Were we to focus on something that was a little
bit more…were we to have designed the survey to capture the NOAA people, we would have
a better idea of who NOAA is directly communicating with. If we had sent the survey to NOAA people,
for example, we would presume that they are, in fact, connected and we would learn who
they’re connected to. Instead, the only way we really sampled NOAA
here is by asking the people in our main focal areas who they’d been in touch with. OK, I
hope I answered that reasonably. I see that one up now and then I have one
up from Amber, thanks for listening to the talk. “I noticed at the beginning of the survey
results, you showed that a significant fraction of respondents didn’t believe that climate
change was happening and/or didn’t believe that it was caused by humans. Were they exempted
from the rest of the survey?” No, they were not exempted from the rest of
the survey. They were in the rest of the survey. Actually, I found this to be an interesting
result in a different way. I think it was probably two years ago, I was
listening to public radio and there was a story where people had surveyed people who
were working at national parks and state parks and things and found that, in terms of climate
change, that those people who work at parks are no different than the average American,
where something like 25 percent of individuals don’t believe that climate change is happening
and don't believe that it's human-caused. Actually, the fractions that we had were a
fair bit lower than that, so it suggests that these resource managers aren’t like the rest
of America. There’s more confidence that, in fact, climate change is happening and that
it's largely human-induced. Ashley: Then, we have Shannon with a question
over the phone. Shannon? Shannon: Great. Hi Mark. This is Shannon McNeely
from the North Central Climate Science Center. Thanks for your presentation, a lot of interesting
information in there. I just had a comment about a necessary caveat
about viewing this as a baseline for the North Central region. I know you know this but I
didn’t hear you make this comment, so I just wanted to make sure this got out there. It’s really only a partial baseline for the
North Central region because of coverage, first, geographic coverage. There’s a significant
portion of our North Central region that wasn't in your survey data. Basically, for four states out of our seven-state
region, there's little to no data. Coverage in the Northern Plains and the Missouri Basin
region is limited compared to our western portion of our North Central region and compared
to the Southwest region, so that’s just an important thing to note. Another thing is that we increasingly have
been focusing a lot on tribes, tribal engagement through both the BIA and through tribal nations
and tribal lands themselves. I don’t believe you had any tribal respondents in your survey. That’s just another thing to think about,
in terms of the stakeholder or manager coverage. That’s all I had to say. Thank you. Mark: Yeah, OK. Shannon, thank you for both
of those comments. Those are good. What Shannon points out is true on both accounts,
and I think that that points to another one of these struggles or challenges that we have
with doing any of these kinds of surveys that are based through regional offices. These four agencies all have different regional
boundaries. As a consequence, finding a regional person and having them push it out to their
different regional staff, we found that there was a boundary overlap issue that comes up
again and again. We have respondents from Texas and that’s
because one of the agencies…Texas is within the Southwest region and the rest of them
are not. We sampled some people from Texas as well.
Yeah, we don’t have equally strong coverage from all of those different areas. There’s
two reasons why we have a reduced number of people surveyed from those other states from
the North Central region, as Shannon pointed out. One is that we didn’t do a good job of sampling
it, but the other is that there’s considerably less federal land in those states as well,
and so there’s a smaller footprint out there. There was a question that came up that’s related
to this on the…I should also point out that we struggled with the tribes question here
in the Southwest as well. For our region, we have this bimodal problem
that we have in Arizona and New Mexico, some of the largest, most well-organized, powerful
tribes in the country…and relatively a small number.
I think we’re represented by like 114 tribes in the Southwest region, and 109 of them are
in California. They’re generally all very small, and considerably less synthetically
organized, so they’re a very difficult group to get to and try and survey systematically.
Feeling that we couldn't do a good job of it, we ended up not doing it at all
on this particular project. Somebody here asked about the Forest Service,
and I’ll say that…”Do you have a sense of the reluctance of the Forest Service to participate
in some of the questions?” The Forest Service had us go through their
Washington office and they were very concerned about the potential sensitivity of Forest
Service employees having to reveal their preferences or beliefs about climate change to a survey.
They felt that people would be less interested in answering the survey and it wasn’t information
that was appropriate for us to be collecting on their employees, so we didn’t. I can continue reading on these questions
that are coming up on the chat box, these are great. One of them is, “Did you think
about mapping risk perception on drought or fire risk models, or one of the other higher
risks? The climate velocity seems so focused on species distribution, which is just one
aspect, right?” Absolutely correct. We have thought about
it. I guess we haven’t done it yet. One of the things that we’ve talked a lot about in
the Southwest is that a drought risk map…we’re not sure what a drought risk map looks like,
and there are likely to be many drought risk maps. There's the drought risk with respect to reservoir
water and the filling of stream flows, and there are other drought risks that might be associated
with snow packs, snow melt and fire season. They may be relatively correlated, but they
all carry some sort of bias in them. This is something that the Southwest Climate Science
Center has spent a lot of time talking about, and we haven’t come to a resolution on this. There are maps that are published on things
like fire risk and we could be looking at those attitudes associated with fire risk,
and we’ll be doing that in the future. Let’s see. The questions are coming in pretty quickly,
so it’s hard to flip down. “Could you please flip back to one of your
map slides that shows where the respondents were located?” Sure, I’ll do that. There’s
a map of the participants and the contacts that were contacted from those participants. The dot sizes represent the number of people.
This map here shows all of the units for which there was federal responsibility that people
answered questions from. Also, showing that we sampled some people from the state of Washington
and Oregon even though they weren’t in our region. OK. Next question. “Did you investigate tradeoffs
among natural resource management goals for each agency? For example, did managers indicate
a preference or weighting among competing goals, cost versus biological resource state,
and how these tradeoffs may change with increasing knowledge or climate change effects?” That’s
a great question. No, we didn’t do that. We are trying to be mindful of the fact that
all of our regional resource managers said that what they are worried about is survey
fatigue among resource managers and they wanted us to keep the survey questions brief. We
tried to keep them brief, so that people would stay engaged with the survey, all the way
through to the network component of it. As I mentioned, there was a limited number of
questions. But that would be a really good set of questions
to ask. We get at that a little bit in our subsequent survey that we’re doing, based
on our consultative groups, by thinking about competing goals for management. Not expressing
it in exactly that way, but we’re trying to get at prioritization of the different resource
objectives. Mary O’Brien asks, “Do you have a sense of
which resource decisions have integrated climate science information?” Again, that’s not a
question that was asked. On the survey, we didn’t ask people to give
examples of where they’ve used information for good effect. That has been something that
we’ve been doing, with respect to the consultative groups and is done in the content analysis.
We have not yet finished the content analysis to be able to answer that question. Rachel asks, “Did you investigate tradeoffs
among natural resource management goals for each agency? For example…” Yeah, that’s
a lot like…Oh, no. That is the same question from above. That just repeated on the thing.
I’ll leave that one as it is. With that, I have made it to the bottom of
the question list. I think we’re coming up on the end of the hour. To be respectful of
people’s time, I think I will hand it back to Ashley. Ashley: Thanks, Mark. We did have one more
come in, from Amber. Mark: Sure. Ashley: Do you see that one? Mark: I’m sorry. There it is. “You mention
that there have been 46 prior studies on social networks and climate change. What did you
find out in your literature review about how climate change-related social networks tend
to operate? Did your results from this study of Southwest land managers confirm or conflict
with these prior studies?” Boy, [laughs] that’s going to be a tough one to answer. How networks tend to operate…Basically,
on the literature of social networks and climate change, it’s a pretty broad literature across
a variety of different things, having to do with human health and all sorts of different
things. My understanding of this literature is that
these networks are probably better predicted by the kinds of problems that they're addressing
(things like whether it's human infrastructure, transportation or human health, or natural
resources) than they are by the fact that it's climate change, in particular. That's not a very good answer. I'm sorry.
That’s the best I can do at the moment. Ashley: Thank you very much for the webinar,
Mark.
