Climate is a Stronger Driver of Tree and Forest Growth Rates than Soil and Disturbance

The availability of water, nutrients, and light is vital to plant growth. Since these resources vary spatially and temporally within tropical rainforests, so does tree growth. Previous studies have found that tree growth increases with rainfall, decreases with drought, and is higher on nutrient-rich soils. Additionally, anthropogenic and natural disturbances can stimulate forest growth by creating canopy gaps that open up space, increase light, nutrient, and water availability, and thereby accelerate the growth of previously suppressed individual trees. Although many studies have examined the effects of either climate or soil on growth rate, few have simultaneously considered the effects of both environmental factors, along with disturbance, on tree growth rate. To assess the variability of tree growth rates across lowland Bolivia, and to determine the effects of climate, soil, and disturbance on growth variables, Toledo et al. (2011) collected data from 165 1-ha permanent sample plots within the Bolivian forest network. Multiple linear regression analyses demonstrated that climate variables, particularly water availability, were the strongest drivers of tree growth. More rainfall, a shorter and less intense dry period, and higher temperatures led to higher tree growth. Tree growth increased slightly with soil fertility, while basal area growth was largest at intermediate soil fertility levels. Interestingly, growth showed no relationship with total soil nitrogen or plant-available soil phosphorus. Growth rates also increased in logged plots just after logging, but this effect disappeared after 6 years. These results suggest that climate change may have a large impact on forest productivity and carbon sequestration; however, the negative impact of decreased rainfall may be offset by the positive influence of increased temperature.—Megan Smith
Toledo, M., Poorter, L., Pena-Claros, M., Alarcon, A., Balcazar, J., Leano, C., Licona, J.C., Llanque, O., Vroomans, V., Zuidema, P., Bongers, F. 2011. Climate is a Stronger Driver of Tree and Forest Growth Rates than Soil and Disturbance. Journal of Ecology 99: 254–264. DOI: 10.1111/j.1365-2745.2010.01741.x

Lowland Bolivia is characterized by large differences in geomorphology and geological history. Young, nutrient-rich soils predominate in the western part of the country, while ancient rocks and nutrient-poor soils predominate in the east. There are two rainfall gradients within the country: a south–north gradient, where rainfall increases towards the equator with mean annual precipitation ranging from 1100 to 1900 mm, and an east–west gradient, where mean rainfall increases from 1600 to 2200 mm. Precipitation in individual years can vary from 600 to 3000 mm per year from the driest to the wettest areas. The lowlands in Bolivia experience a 4–7 month dry season from April to September. Mean annual temperature is between 24.2 and 26.4°C.
In many tropical countries, forest management for timber harvesting is an important economic activity. In Bolivia, the current Forestry Law provides a strong incentive for sustainable forest management. The law requires the establishment and monitoring of a network of permanent sample plots in the lowland forestry areas. Plots cover different forest types, from humid evergreen Amazon forests to dry deciduous Chiquitano forests.
For this study, 165 1-ha plots were selected from the Network of Permanent Plots in lowland Bolivia. These plots were established in old-growth forests on flat terrain and in an altitude range from 100 to 500 m asl. The plots were distributed over the main environmental gradients of climate and soil, and 52% of them were affected by logging. Measurement periods varied between 2 and 11 years, with the last measurements taking place in 2007. Plots were typically square, with only 11 of them being rectangular. In each plot, every tree > 10 cm diameter at breast height (DBH) was measured with diameter tape, painted at the measurement point, and tagged and identified. Re-censuses were carried out in the same season or month as the plots were established to minimize the effect of intra-annual variation in DBH change. In most of the 85 plots that were affected by logging, logging started immediately after their establishment. Eighty plots were not logged.
The annual diameter growth per individual was calculated as (Df – Di)/t, where Df is the final diameter, Di is the initial diameter at the start of the interval, and t is the time in years between measurements. Based on this diameter growth rate (DGR), the authors calculated five growth variables per plot, representing growth at the individual level: the average (DGRavg), median (DGR50), 90th percentile (DGR90), 95th percentile (DGR95), and 99th percentile (DGR99) of annual diameter growth. Values for the 90th and 95th percentiles are not included in the results because they were highly correlated with DGRavg, DGR99, and with each other. The DGR50 and DGR99 were calculated to provide information on median and upper levels of growth rate. The authors also calculated the basal area growth rate at the stand level (BAGRstand) as the net yearly basal area change per plot: (BAf – BAi)/t, where BAf is the final total plot basal area, BAi is the plot basal area at the start of the measurement interval, and t is again the time in years between measurement dates. BAGRstand includes the effects of growth, recruitment, and mortality, while DGR is based only upon individuals that survived the whole monitoring period.
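These calculations are simple enough to express directly; the sketch below, in Python, shows how the tree-level DGR and the plot-level growth variables could be computed from repeated diameter measurements (the data frame, column names, and values are hypothetical, not from the paper):

    import numpy as np
    import pandas as pd

    # Hypothetical tree-level census table: one row per surviving tree, with
    # initial and final DBH (cm) and the census interval t (years).
    trees = pd.DataFrame({
        "plot": ["p1", "p1", "p1", "p2", "p2"],
        "dbh_initial": [12.4, 30.1, 18.9, 25.0, 11.2],
        "dbh_final":   [13.0, 31.0, 19.5, 25.9, 11.5],
        "years":       [5.0, 5.0, 5.0, 4.0, 4.0],
    })

    # Diameter growth rate per surviving individual: (Df - Di) / t, in cm per year.
    trees["dgr"] = (trees["dbh_final"] - trees["dbh_initial"]) / trees["years"]

    # Plot-level growth variables: mean, median and 99th percentile of DGR.
    growth = trees.groupby("plot")["dgr"].agg(
        DGRavg="mean",
        DGR50="median",
        DGR99=lambda x: np.percentile(x, 99),
    )

    # Stand basal area growth rate: (BAf - BAi) / t, in m2 per ha per year, using
    # total plot basal area at the start and end of the interval (so recruitment
    # and mortality are included, unlike the tree-level DGR of survivors).
    plots = pd.DataFrame({
        "plot": ["p1", "p2"],
        "ba_initial": [22.5, 19.8],   # m2 per ha
        "ba_final":   [24.6, 21.0],
        "years":      [5.0, 4.0],
    }).set_index("plot")
    growth["BAGRstand"] = (plots["ba_final"] - plots["ba_initial"]) / plots["years"]
    print(growth)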
Twenty soil samples were collected within each plot from the top 30 cm of soil. A pooled sample of 500 g was analyzed within a week of collection. The analyses included 12 edaphic variables: percentages of clay, silt, and sand; exchangeable Ca, Mg, Na, and K; cation exchange capacity; acidity; plant-available phosphorus; organic matter; and total nitrogen. For each plot the authors obtained five climatic variables that were interpolated from available data from 45 weather stations in the region.
The authors performed two independent Principal Component Analyses (PCA) to summarize the climatic and soil variables. The PCA was done for 220 1-ha plots that are part of the Network of Permanent Plots in lowland Bolivia, including the 165 plots analyzed in this study. The climatic PCA considered annual temperature, annual precipitation, precipitation of the three driest months, length of the dry period (number of months with < 100 mm), and length of the drought period (number of months with < 50 mm). The first axis (65%) correlated positively with annual precipitation and negatively with dry period length. The second axis (29%) correlated positively with mean annual temperature and negatively with the precipitation of the driest months. The edaphic PCA considered the 12 edaphic variables. The first two axes of the edaphic PCA explained 68% of the variation. The first axis (48%) correlated positively with variables related to soil fertility and negatively with acidity. The second axis (20%) represented variation in soil texture and correlated positively with clay and silt and negatively with sand.
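As a rough illustration of this step, the sketch below standardizes a set of climatic variables and extracts the first two principal components, whose scores play the role of the composite climate axes (the variable names and values are invented; the edaphic PCA works the same way on the 12 soil variables, and the authors' actual software and settings are not specified here):

    import numpy as np
    import pandas as pd
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    # Hypothetical plot-by-variable table of the five climatic variables used
    # in the climatic PCA (values are made up for illustration).
    rng = np.random.default_rng(0)
    climate = pd.DataFrame({
        "annual_temp":           rng.normal(25.3, 0.6, 220),
        "annual_precip":         rng.normal(1800, 300, 220),
        "precip_3_driest":       rng.normal(120, 40, 220),
        "dry_months_lt100mm":    rng.integers(3, 8, 220),
        "drought_months_lt50mm": rng.integers(0, 5, 220),
    }, index=[f"plot_{i}" for i in range(220)])

    # Standardize the variables, then extract the first two principal components;
    # the component scores serve as the composite climate axes used in the
    # later regressions.
    pca = PCA(n_components=2)
    scores = pca.fit_transform(StandardScaler().fit_transform(climate))
    climate_axes = pd.DataFrame(scores, index=climate.index,
                                columns=["climate_axis1", "climate_axis2"])
    print(pca.explained_variance_ratio_)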
Four logging-related variables were used to describe forest disturbance in each plot. Logging Presence (LP) indicated whether logging occurred (1) or not (0) in a plot, and Logging Impact (LI) described whether the impact was high (1) or low (0) in logged plots (based on the number and location of logged trees and the number of additional trees that died due to logging operations). The other variables representing logging disturbance were the Logged Basal Area (LBA, in m2 ha-1, based upon the number and diameter of the trees logged) and the Time After Logging (TAL, in years).
The four growth variables were first correlated with the individual environmental variables to evaluate which components of the composite axes were most important. Each of the four growth variables was then regressed on the four main environmental axes and the four disturbance variables using a series of backward multiple regressions. The authors used the PCA axes in this regression analysis to avoid problems with multicollinearity.
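A minimal sketch of such a backward multiple regression, assuming ordinary least squares and p-value-based elimination (the variable names and synthetic data are ours; the paper's exact selection criteria may differ):

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    def backward_regression(y, X, alpha=0.05):
        """Simple backward elimination: refit the OLS model, dropping the least
        significant predictor each round, until every remaining predictor has a
        p-value below alpha (or only one predictor is left)."""
        X = sm.add_constant(X)
        while True:
            fit = sm.OLS(y, X).fit()
            pvals = fit.pvalues.drop("const")
            worst = pvals.idxmax()
            if pvals[worst] <= alpha or len(pvals) == 1:
                return fit
            X = X.drop(columns=worst)

    # Hypothetical plot-level table: one growth variable plus the four composite
    # environmental axes and four logging variables (values are made up).
    rng = np.random.default_rng(1)
    n = 165
    data = pd.DataFrame(rng.normal(size=(n, 8)),
                        columns=["rain_axis", "temp_axis", "fertility_axis",
                                 "texture_axis", "LP", "LI", "LBA", "TAL"])
    dgr_avg = 0.3 + 0.05 * data["rain_axis"] + rng.normal(0, 0.05, n)

    model = backward_regression(dgr_avg, data)
    print(model.summary())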
Growth variables demonstrated high variation across plots, with the largest variation in DGR50 and BAGRstand. On average, BAGRstand was 0.49 m2 ha-1 year-1. Mean DGRavg was 0.31 cm year-1, with the lowest value in a plot with low rainfall and the highest value in a plot with intermediate rainfall. For the median and upper growth rate limits (DGR50 and DGR99), the highest values were found in plots with higher rainfall and the lowest values in plots with lower rainfall. Although low DGRavg was found mostly in plots with lower rainfall, low BAGRstand was found in plots with both low and high amounts of rainfall. A table displaying the standard deviation and ranges of tree and forest growth variables was constructed, as was a figure displaying the variation in average diameter growth (DGRavg) and stand basal area growth (BAGRstand).
The four growth variables had similar relationships to the environmental axes and variables, except for the texture axis, drought period, P, acidity, and LBA. Most of the significant relationships were found for DGRavg, DGR50, and BAGRstand. Growth variables were always positively and significantly correlated with the climate axes and, in most cases, negatively and non-significantly with the soil axes. Growth variables increased significantly with annual precipitation and decreased with the length of the dry period. Soil variables generally had negative relationships with growth variables, but organic matter content was the only variable that was consistently significant. All disturbance variables except TAL were positively related to the four growth variables. A table displaying the Pearson correlation coefficients of the four tree growth variables with the four environmental axes, four disturbance variables, and 18 environmental variables from the 165 1-ha plots was constructed.
The backward regression models demonstrated the relative importance of, and explained the effects of, the environmental axes and disturbance variables on growth rates. Rainfall was the most important axis, affecting all growth rates significantly and positively. Temperature and soil fertility also had positive effects, although sometimes with a plateau, while BAGRstand declined at higher soil fertility levels. The soil texture axis had significant negative effects only on growth rates at the tree level. The disturbance variables 'logging' and 'logging intensity' had significant positive effects on DGRavg and DGR50, whereas time after logging (TAL) had a significant negative effect. A figure displaying the relationships of average tree diameter growth rate (DGRavg) and stand basal area growth rate (BAGRstand) with rainfall, temperature, soil fertility, and time after logging was constructed. A table displaying the backward multiple regression analysis of forest and tree growth variables on environmental and disturbance variables was also constructed.
Overall, the authors found high variation in tree growth rates at the individual and stand levels in lowland Bolivia. The DGRavg of 0.31 ± 0.10 cm year-1 and its range (0.12–0.70 cm year-1) are within the range (0.08–0.80 cm year-1) of diameter growth rates reported for other tropical forests. The average BAGRstand (0.49 ± 0.21 m2 ha-1 year-1) was similar, but its range (0.17–1.22 m2 ha-1 year-1) was larger than for other tropical forests.
In general, growth rates increased with water availability. The lowest annual growth rates at the tree level occurred at the drier end of the rainfall gradient. Tropical dry forests are likely to have lower annual growth rates than moist forests due to their shorter growing period and lower rainfall. Other studies have shown that lower annual rainfall, or a more intense drought period, decreases growth rates. However, while basal area growth was positively correlated with rainfall, some moister forests had lower growth, which can possibly be attributed to lower stem density or to a high abundance of slow-growing, drought-adapted species. The authors also found significant positive effects of temperature on all growth variables, which is surprising. Increased growth at higher temperatures may occur because plants move closer to their photosynthetic optimum; moreover, due to the forests' geographical location, the trees often experience cold fronts that reduce photosynthetic activity and cause chilling injury, so warmer conditions may alleviate these limitations. These results are significant when considering climate change predictions: the negative effects of predicted increased rainfall seasonality may be partially offset by the positive effects of temperature on tree growth in this study region.
Non-significant or even weakly negative correlations were found between growth rates and the individual and composite soil variables. The edaphic effects may have appeared weak in comparison to the climatic effects because soils were sampled at a small scale at one point in time, whereas climate was based on long-term averages interpolated over large spatial areas. The authors also believed that the weak edaphic effects were more likely due to the confounding effects of water availability in their study: some plots in high-rainfall areas combined high growth rates with highly weathered, nutrient-poor soils. The multiple regression analysis removed this confounding and showed that tree growth did in fact increase with soil fertility, while stand basal area growth was optimal at intermediate levels of soil fertility. Furthermore, most lowland forest soils have large amounts of N but small amounts of available P, so scientists assume that P limits plant growth. However, the authors found weak relationships between growth, N, and P. Plants in tropical forests may obtain nutrients directly from litter fall (rather than after nutrients enter the soil), from the atmosphere, or from mycorrhizal fungi that obtain nutrients from litter and soils.
Growth rates increased in logged plots, especially those with a high logging impact, and decreased with time after logging. Opening of the canopy can enhance light availability, photosynthetic carbon gain, and tree growth: logging removes larger trees, increasing canopy openness, with associated changes in micro-environmental conditions that affect forest growth rates. The authors' results showed that logged plots had significantly higher DGRavg, and basal area growth also increased with time. The amount of logged basal area did not affect BAGRstand, probably due to the small variation in logging intensity among the study plots; conventional logging intensity in Bolivia is usually low (1–3 trees per ha). Logging mainly affected DGRavg and DGR50, reflecting its effect on the small, suppressed trees that benefit from canopy opening. DGR99 was not affected by logging because these fast-growing individuals are already in the forest canopy and receive high light. Time after logging had negative effects on growth rate at the tree level: diameter growth rate peaked after 2 years and returned to original levels 6 years after logging, most likely because the forest canopy closed again.
In conclusion, the results from this study revealed that both environment and disturbance explained growth rate variation in Bolivian lowland forests. This variation was most strongly influenced by climate and water availability. Soil fertility and soil texture did not show strong effects, while growth rates increased with logging-related disturbances. Climate change scenarios for the tropics predict a future decrease in rainfall and an increase in temperature. The positive effects of higher temperature in these Bolivian forests may then offset the negative effects of increased seasonality on tree growth. 

Climate Change Could Alter the Distribution of Mountain Pine Beetle Outbreaks in Western Canada

The mountain pine beetle Dendroctonus ponderosae is an eruptive, tree-killing bark beetle that causes large-scale pine mortality in the forests of North America during epidemic conditions. Beetles attack en masse and, along with the fungi they vector, overwhelm pine trees' defensive capacity. Usually, extremely cold winter temperatures limit the spread of mountain pine beetles to northern latitudes by causing widespread beetle mortality. However, climate change may transform previously unsuitable habitats into suitable ones as temperature increases lead to milder winters and warmer summers. Despite the importance of extreme cold in limiting eruptive beetle populations, few landscape-level models have examined how cold temperature regimes and future climatic variability will limit or enhance beetle outbreaks. Using a spatiotemporal statistical modeling framework, Sambaraju et al. (2011) examined the relationship between elevation, temperature, and the occurrence of mountain pine beetle outbreaks. Additionally, they investigated the spatial outbreak patterns of the beetle under a combination of four simulated climate change and two climatic variability scenarios, using terms adapted from the model and data from a peak outbreak year. The authors found that the timing, frequency, and duration of cold snaps had a strong negative relationship with the occurrence of an outbreak in a given area. Sudden drops in temperature and extreme winter minimum temperatures reduced outbreak probability. Increases in mean temperature of 1°C to 4°C increased the risk of outbreaks, with the effects manifesting first at higher elevations and then at increasing latitudes. Increasing the variance associated with mean temperature did not change the trend in outbreak potential. These results demonstrate how climate change could result in a higher frequency of mountain pine beetle outbreaks, ultimately causing higher rates of pine forest mortality.—Megan Smith
Sambaraju, K.R., Carroll, A.L., Zhu, J., Stahl, K., Moore, R.D., Aukema, B.H. 2011. Climate Change Could Alter the Distribution of Mountain Pine Beetle Outbreaks in Western Canada. Ecography 34: 211–213. DOI: 10.1111/j.1600-0587.2011.06847.x

              The mountain pine beetle's range extends from northern Mexico through the Pacific Northwest to northwestern Alberta in Canada. Beetle populations establish at elevations of 1500–2600 m above sea level. In Canada, the lodgepole pine Pinus contorta var. latifolia is the predominant host; however, other species such as ponderosa pine P. ponderosa, western white pine P. monticola, and jack pine P. banksiana are also attacked. The beetle is univoltine over most of its range, but its life cycle can extend to two years (semivoltinism) at elevations greater than 2600 m due to sub-optimal temperatures for insect growth. Adults emerge from brood trees between mid-July and mid-August and then attack trees en masse. A high beetle attack rate, along with vectored fungi, causes tree mortality. There are four instars in the beetle life cycle, and the third and fourth instars are commonly exposed to the coldest weather between December and February. The beetles are capable of surviving winter temperatures in the range of –34°C to –37°C. However, frequent and/or prolonged occurrences of severe winter temperatures can cause widespread beetle mortality by killing off a large portion of the immature beetles. For example, cold temperature extremes played a key role in the collapse of a mountain pine beetle outbreak in the Chilcotin area of central British Columbia during 1984–1985.
              This study investigated the association of temperature and elevation with the occurrence of mountain pine beetle outbreaks in western Canada from 1992 to 2007. It also examined the spatial outbreak patterns of the beetle under four simulated climate change and two climatic variability scenarios.
              The authors generated a grid in ArcMap covering the Canadian provinces of British Columbia, Alberta, and Saskatchewan and portions of the Yukon and Northwest Territories. This grid was split into cells measuring 0.1° latitude × 0.2° longitude. These cells were approximately 12 × 12 km in size, and 17,063 cells were created.
              The authors then obtained aerial survey data sets of red attack (trees killed by mountain pine beetle in the previous year) for the province of British Columbia from the Forest Insect and Disease Survey (FIDS) of the Canadian Forest Service for 1990–1996 and from the British Columbia Ministry of Forests and Range for 1999–2007. The locations and extent of tree mortality had been mapped on 1:250,000 NTS maps, and the maps were digitized into shapefiles using ArcGIS. The yearly infestation shapefiles were overlaid on the study grid for each year. Sambaraju et al. used a binary variable (presence = 1, absence = 0) indicating a tree-killing population of mountain pine beetle in each cell as the response variable in their study. A map displaying the study grid was constructed.
              Daily minimum and daily maximum temperatures were derived for each grid cell by interpolation from station observations from 1990 to 2006. The database of observations included data from 1974 stations assembled from the climate station network of Environment Canada and the networks of fire weather stations maintained by the forest service in each province. Elevation distributions for each grid cell were extracted from the higher-resolution Can3d30 DEM. Due to the complex topography (large elevation ranges within grid cells), temperatures were interpolated to the center of each grid cell using the 10th percentile of elevation.
              Sambaraju et al. used logistic regression to analyze the relationship between the explanatory variables and the occurrence of mountain pine beetle outbreaks in each cell using PROC LOGISTIC (SAS 2008). Cold temperature variables included cold snaps (a minimum of four consecutive days of average winter temperatures at or below –20°C), sudden temperature variations, and extreme minimum temperatures during winter. Other temperature and degree-day terms were taken from previous models. Spatial and temporal dependence was represented by the infestation status of neighboring cells and by occurrences of infestations in previous years for a given cell. Each spatial and temporal variable was fit in combination with the others to find the best spatial-temporal dependence structure. Models were selected based on the Akaike information criterion (AIC) score, with a lower AIC value indicating a better fit. The influence of individual temperature terms on outbreak probability was then studied by including a single variable along with the best-fitting combination of spatial and temporal terms. Finally, all the covariates were included in a multivariable logistic regression along with the best-fitting spatial and temporal terms, and a model with the best subset of variables was selected by backward selection. A table displaying the temperature-related terms, spatial neighborhood structures, and temporal infestation terms used in the spatial-temporal logistic regression models was constructed.
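The general workflow (fit candidate spatial-temporal structures, compare them by AIC, then add temperature terms one at a time) can be sketched as follows; the synthetic data and variable names are ours, and the authors used PROC LOGISTIC in SAS rather than Python:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical cell-by-year table: binary outbreak response plus spatial,
    # temporal, and cold-temperature covariates (all values are made up).
    rng = np.random.default_rng(2)
    n = 5000
    cells = pd.DataFrame({
        "neighbours_infested": rng.poisson(1.5, n),   # infested neighbouring cells
        "infested_lag1": rng.integers(0, 2, n),       # infestation one year earlier
        "infested_lag2": rng.integers(0, 2, n),
        "cold_snap_count": rng.poisson(2, n),         # winter cold snaps (<= -20 C)
    })
    true_logit = (-2 + 0.6 * cells["neighbours_infested"]
                  + 1.2 * cells["infested_lag1"]
                  - 0.4 * cells["cold_snap_count"])
    cells["outbreak"] = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

    def fit_logit(terms):
        X = sm.add_constant(cells[terms])
        return sm.Logit(cells["outbreak"], X).fit(disp=False)

    # Compare candidate spatial-temporal dependence structures by AIC (lower is
    # better), then test a cold-temperature term on top of the best structure.
    candidates = {
        "neighbours + lag1": ["neighbours_infested", "infested_lag1"],
        "neighbours + lag1 + lag2": ["neighbours_infested", "infested_lag1",
                                     "infested_lag2"],
    }
    fits = {name: fit_logit(terms) for name, terms in candidates.items()}
    best = min(fits, key=lambda name: fits[name].aic)
    cold_fit = fit_logit(candidates[best] + ["cold_snap_count"])
    print(best, fits[best].aic, cold_fit.aic)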
              The authors selected four climate change scenarios of temperature increases of 0°C, 1°C, 2°C, and 4°C. To assess whether variability in the climate change scenarios influenced outbreak trend, they also selected two scenarios where the variance associated with a given mean temperature increase was 1°C or 2°C. These simulations were conducted with outbreak data from 2005. Climate change was simulated by adding non-overlapping streams of random numbers from a Gaussian distribution of known mean and variance to the daily maximum and minimum temperature data for years 2003 and 2004 for the entire study area. This altered temperature data set was used to redefine the temperature covariates that were then run through the multivariable spatial-temporal logistic model. The logits were back-transformed to generate probabilities of outbreak in a given cell.
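A minimal sketch of this perturbation step, assuming independent Gaussian deviates added to the daily minimum and maximum series (the temperature data here are synthetic):

    import numpy as np

    rng = np.random.default_rng(3)

    def perturb_temperatures(tmin, tmax, mean_increase, variability):
        """Simulate one climate change realization by adding independent Gaussian
        deviates (mean = scenario warming, sd = variability scenario) to the
        daily minimum and maximum temperature series."""
        return (tmin + rng.normal(mean_increase, variability, size=tmin.shape),
                tmax + rng.normal(mean_increase, variability, size=tmax.shape))

    # Hypothetical two-year daily series for one grid cell (values are made up).
    days = 730
    tmin = rng.normal(-5, 10, days)
    tmax = tmin + rng.uniform(5, 12, days)

    # One realization of a "+2 degC mean, 1 degC variability" scenario; the
    # perturbed series would then be used to recompute the temperature covariates
    # and passed through the fitted logistic model, back-transforming the logits
    # to outbreak probabilities:
    tmin2, tmax2 = perturb_temperatures(tmin, tmax, mean_increase=2.0, variability=1.0)
    # probability = 1.0 / (1.0 + np.exp(-predicted_logit))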
              Results were analyzed using two methods. The first grouped cells into five classes, from very low to very high outbreak probabilities. The mean numbers of cells in each outbreak class across one hundred simulations for each combination of mean temperature increase and variability scenario were calculated, and regression analyses were performed. The second approach calculated the median probability for each cell across the one hundred simulations for the four mean temperature increase scenarios. Median probabilities were also grouped under the five outbreak probability classes. The authors then evaluated changes in outbreak class for each cell under the 1°C, 2°C, and 4°C scenarios compared to the 0°C mean temperature increase scenario. Cells in which the outbreak class changed were mapped to identify whether the changes occurred at elevations above 1500 m or at latitudes farther north of the current beetle range. The authors graphed the number of cells, and the percentage of total cells, per temperature increase scenario whose outbreak risk increased or decreased based on elevation.
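Both summarization approaches amount to classifying a cell-by-simulation matrix of probabilities; a sketch under assumed class boundaries (the cut points and data below are placeholders, not the paper's values):

    import numpy as np
    import pandas as pd

    # Hypothetical outbreak probabilities: one row per grid cell, one column per
    # simulation (100 simulations of a single scenario; values are made up).
    rng = np.random.default_rng(4)
    probs = rng.beta(2, 5, size=(17063, 100))

    # Five outbreak classes from very low to very high (placeholder cut points).
    bins = [0, 0.2, 0.4, 0.6, 0.8, 1.0]
    labels = ["very low", "low", "medium", "high", "very high"]

    # Approach 1: mean number of cells per class across the 100 simulations.
    counts_per_sim = pd.DataFrame({
        i: pd.Series(pd.cut(probs[:, i], bins=bins, labels=labels,
                            include_lowest=True)).value_counts()
        for i in range(probs.shape[1])
    })
    mean_counts = counts_per_sim.mean(axis=1)

    # Approach 2: median probability per cell across simulations, then classify.
    median_class = pd.Series(pd.cut(np.median(probs, axis=1), bins=bins,
                                    labels=labels, include_lowest=True))
    print(mean_counts)
    print(median_class.value_counts())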
              Sambaraju et al. found that the likelihood of finding an outbreaking population of mountain pine beetle killing trees in a given area increased with occurrences of patches of dead trees in the same area in the past three years, as well as in the surrounding 18–22 km in the same year. All spatial and temporal covariates had a positive association with outbreak likelihoods.
              Cold snaps, or periods of four consecutive days with winter average temperatures below –20°C, decreased the outbreak probability within a cell. Early cold snaps in October and November had the largest negative impact on beetle populations, more so than cold temperature episodes occurring in the spring. However, temperatures did not have to drop below –20°C for cold weather to have a negative influence on beetle populations: increasing frequencies of sudden drops in average temperature of >10°C between consecutive days in winter also negatively affected beetle populations, whereas increases in temperature or day-to-day declines of 0–5°C did not decrease outbreak probability. Extreme minimum temperatures of –30°C or below decreased the odds of an outbreak within a cell. A table displaying the maximum likelihood estimates of cold temperature coefficients in the spatial-temporal logistic regression models was constructed.
              Outbreak probabilities increased with higher maximum, minimum and mean annual temperatures, average summer temperatures, and with sufficient accumulation of degree-days for 50% egg hatch and adult emergence. Outbreak probabilities also decreased at the lowest and highest elevations across the study area relative to the mean elevation.
              A total of 21 variables were included in the final model, and most maintained the same sign as in the individual analyses. All cold-temperature terms selected by the model reduced the probability of mountain pine beetle outbreaks. Increases in maximum temperature, mean temperature, and annual temperature change led to an increased probability of outbreaks. Furthermore, successful accumulation of degree-days for a univoltine life cycle strongly increased the probability of an outbreak. Graphs displaying the odds ratio estimates for cold temperature covariates were constructed.
              The outbreak trends under the different climate change and climate variability scenarios demonstrated that an increase in average temperatures of 1°C or more caused a decrease in the mean number of cells where the probability of an outbreak was very low. Increases of 2°C and 4°C increased the mean number of cells in the 'medium' risk class. The highest outbreak potential occurred with a mean increase of 2°C and decreased above and below this value. Increasing the variability associated with a mean temperature increase did not cause large differences in the mean number of outbreak cells among the different probability classes.
              Similar results were observed when examining the median outbreak probability per cell. Increased temperatures increased the risk of outbreaks, especially at higher elevations. At an increase of 1°C, a significant percentage of the higher-risk cells occurred at the periphery of the existing outbreak in areas of higher elevation. Similar results were seen with mean increases of 2°C and 4°C; however, at 4°C the increased outbreak probability was manifested at more northern latitudes in addition to higher elevations. With an increase of 4°C, there was a 15% increase in outbreak potential at 120.8°W/58°N, which is 25 km north of the current range in British Columbia. Increasing mean temperatures caused increases in the numbers of higher-risk cells. The percentage of reduced-risk cells occurring at elevations above 1500 m was higher for the 1°C scenario. Graphs displaying the mean outbreak potential in response to simulated mean increases in temperature were constructed, as were maps displaying elevational and latitudinal changes in median outbreak probabilities. Finally, graphs displaying the number of cells, and percentages of total cells per temperature increase scenario, showing increased or decreased risk of mountain pine beetle outbreak under climate change scenarios were constructed.
              Overall, these results demonstrate that temperature effects on the tree-killing behavior of the mountain pine beetle are manifested through winter weather patterns. Extreme cold temperatures and sudden temperature drops limit insect populations. However, climate change may transform previously unsuitable habitat into temperature regimes conducive to beetle population success. For example, forested regions at high elevations and at latitudes farther north of the current range of the mountain pine beetle in British Columbia could become outbreak-prone under climate change. Small increases in temperature produced new outbreak areas, first at higher elevations and then at more northern latitudes. Yet if temperatures exceed a certain threshold, hotter habitats could also see decreased outbreak probabilities.

Increased Risk of Pitch Canker to Australasia Under Climate Change

Fusarium circinatum is the causal agent of pitch canker, a disease that infects Pinus species. It is considered one of New Zealand and Australia's most unwanted pathogens because of the high level of threat it poses to the region's forest industry. In the absence of host-associated insects, the severity of pitch canker is influenced by climatic variables, and the disease is currently problematic in humid subtropical and Mediterranean climates. Climate change may therefore expand the pathogen's potential range in Australasia, enabling large-scale infection of the countries' pine forests if it is ever introduced to these regions. Ganley et al. (2011) used the process-oriented niche-modeling program CLIMEX to model the climatic suitability for pitch canker establishment in Australasia using a current climate dataset and three different global climate models (GCMs) under moderate and high CO2 emissions scenarios. Under Australasia's current climate, the moist periphery of Australia and the northern coastal areas of New Zealand are climatically suitable for pitch canker. Under both climate change scenarios, the potential range of pitch canker would expand southwards to include most of the forests in the central North Island of New Zealand and the northern areas of Tasmania. Currently, these areas are unsuitable because they are too cold to support the pathogen. These results therefore emphasize the need to prevent the introduction of pitch canker to Australasia as the climate warms in these regions.—Megan Smith
Ganley, R.J., Watt, M.S., Kriticos, D.J., Hopkins, A.J.M., Manning, L.K. 2011. Increased Risk of Pitch Canker to Australasia under Climate Change. Australasian Plant Pathology 40: 228–237. DOI: 10.1007/s13313-011-0033-2

Fusarium circinatum is a wound pathogen; intact tree tissue is not susceptible to the disease. It is therefore primarily associated with wounds created by insects, weather, or mechanical damage. In the absence of host-associated insects, the incidence of pitch canker is influenced by climatic variables. The disease is very problematic in humid subtropical and Mediterranean climates, and outbreaks of pitch canker have previously been correlated with severe damage caused by hurricanes. With climate change scenarios projecting an increase in global temperatures, the disease's range may expand.
Fusarium circinatum is pathogenic to over 60 species of pine as well as Douglas fir. Pinus radiata is one of the most susceptible species, which places New Zealand's and Australia's forest plantations at great risk from the pathogen because this species accounts for approximately 90% of New Zealand's pine plantation estate and 50% of Australia's. Given the threat this disease poses to New Zealand's and Australia's current and future pine plantations, it is important to assess how climate change will influence the distribution of the pathogen.
The authors used a previously developed CLIMEX model for pitch canker to estimate its potential distribution within New Zealand and Australia under current and future climate. CLIMEX integrates modeled weekly responses of a population to climate to create a series of weekly and annual indices. The CLIMEX Compare Locations module uses an annual growth index (GIA) to describe the potential for population growth as a function of soil moisture and temperature during favorable conditions and up to eight stress indices to simulate the ability of the population to survive unfavorable conditions. It also uses a mechanism that defines the minimum amount of thermal accumulation during the growing season that is necessary for population persistence.
The growth and stress indices are calculated weekly and combined into an annual index of climatic suitability, the Ecoclimatic Index (EI), which gives an overall measure of the potential of a given location to support a permanent population of the species. It ranges from 0, for locations at which the species is not able to persist, to a theoretical maximum of 100, for locations that are climatically perfect for the species. In this study, EI was classified into unsuitable (EI = 0), marginal (EI = 1–5), suitable (EI = 6–20), and optimal (EI > 20) categories for pitch canker establishment. CLIMEX models are fitted to known distribution data; this process involves adjusting growth and stress parameters, comparing the model results to the known distribution of the species, and including any additional information about the pathogen.
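The EI classification used in the study can be written as a small helper function; a minimal sketch (the function name is ours, not from the paper):

    def ei_class(ei):
        """Classify a CLIMEX Ecoclimatic Index (EI) value into the suitability
        categories used in the study: unsuitable (EI = 0), marginal (1-5),
        suitable (6-20), and optimal (> 20)."""
        if ei == 0:
            return "unsuitable"
        elif ei <= 5:
            return "marginal"
        elif ei <= 20:
            return "suitable"
        return "optimal"

    # Example: classify a few hypothetical EI values for grid locations.
    for ei in (0, 3, 12, 35):
        print(ei, ei_class(ei))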
Records of pitch canker were compiled from individual point locations and from county-, state-, and island-level observations for countries known to have pitch canker. The records reveal that the disease's range is primarily humid subtropical and Mediterranean, although the disease also extends into warmer temperate climates and into regions with tropical humid rainforest and savannah climates.
A CLIMEX model, constructed using the Compare Locations Module, was utilized to model the current and future distribution of pitch canker. The fit of the model was validated using a set of observed occurrences from locations not used in the original fitted dataset. The results show that almost all the observations of pitch canker occurred in suitable areas for the species.
The current climate dataset used within CLIMEX was a 0.5° of arc dataset generated from the 1961–1990 climate normals provided by the Climatic Research Unit. Six climate change scenarios were used to project the potential distribution of pitch canker under climate change within New Zealand and Australia. These scenarios were developed from three GCMs using two Intergovernmental Panel on Climate Change (IPCC) scenarios representing medium (A1B) and high (A2) emissions. The selected GCMs were CSIRO Mark 3.0, NCAR-CCSM, and MIROC-H. Data from these three GCMs were pattern-scaled to develop individual climate change scenarios for 2080 relative to the base climatology. The climate variables used by CLIMEX were taken or calculated from the GCM data.
Areas of New Zealand and Australia with a suitable climate for pitch canker under current and future climate scenarios were estimated using Regional Council administrative areas and state/territory boundaries in ArcGIS. Boundary shapefiles were used to extract EI data by region and state, and the areas for each EI value were calculated to obtain the total area per EI value by region and state. The data were classified using the suitability values, and the total area per suitability class was determined for current pine and softwood plantations under the current and future climate change scenarios. Current New Zealand pine plantations were identified within the Land Cover Database layer using ArcGIS, and a current Australian softwood plantation shapefile was used to delineate softwood plantation boundaries.
            New Zealand pine plantations are located throughout the country, with most concentrated in the central North Island, Northland, Gisborne, and the northern South Island. The softwood resource within Australia is located in the southeastern states of South Australia, New South Wales, Victoria, and Tasmania. A map displaying the location of existing Pinus species plantations in New Zealand and Australia was constructed, as was a map displaying the Ecoclimatic Index class for pitch canker under the current climate.
            The authors determined that under the current climate, the potential distribution of pitch canker in New Zealand included Northland and coastal areas of the North Island. The percentage of area within each region projected to be suitable for pitch canker decreased as latitude increased, ranging from 100% in Northland to 0% in South Island regions. The potential distribution included only 43% of current New Zealand plantations, yet all of these plantations were projected to have a climate that was optimal for pitch canker. Most of the forests in the central North Island were projected to have an unsuitable climate for pitch canker under current climatic conditions.
            In Australia, the potential pitch canker distribution was limited to the moist periphery of the country in Queensland, New South Wales, Victoria, South Australia, and Western Australia. The proportion of each state that was suitable for pitch canker increased with latitude in mainland Australia. Suitable areas closely corresponded to the current plantation area, and only 14% of the softwood plantations were projected to be unsuitable for pitch canker. A table displaying the percentage distribution of the softwood plantation area, by EI class for pitch canker, under current and future climate scenarios was constructed.
            Potentially suitable areas in New Zealand increased under all climate change scenarios, from 8% for the CSIRO A1B model to 34% for the NCAR A2 model. The future potential distributions varied little among the three GCMs and between the two emissions scenarios. Marked increases in potentially suitable area occurred in most regions but were most noticeable in the southern North Island regions of Wellington, Manawatu-Wanganui, and Taranaki. There was an increase in EI for the majority of the regions originally designated as suitable for pitch canker. The area of New Zealand plantations projected to be unsuitable decreased from 57% to 17–21%, while the area projected to be optimal for the pathogen increased from 43% to 76–78% under the climate change scenarios. A graph displaying the suitable area for pitch canker, expressed as percent of total area and of projected suitable area, was constructed for New Zealand and Australia.
            In Australia, the area suitable for pitch canker was reduced under all scenarios, with reductions ranging from 23% for the NCAR A1B model to 51% for the CSIRO A2 model. Reductions were noticeable in Queensland, Western Australia, South Australia, New South Wales, and Victoria. However, climate change resulted in increases in suitable area within Tasmania under all scenarios. The effect of climate change on the area of plantations susceptible to pitch canker was sensitive to the GCM used, but there was little variation between the two emissions scenarios. Areas that were unsuitable showed moderate to substantial increases in suitability under the MIROC and CSIRO models but no change under the NCAR model. Reductions in suitable to optimal area were projected by CSIRO, while reductions in marginal areas were predicted by MIROC. Maps displaying the EI classes for pitch canker under future climate scenarios for each GCM for New Zealand and Australia were constructed, as were graphs displaying the mean EI of suitable areas by region for Australia and New Zealand.
            In summary, under climate change, pitch canker is expected to pose a greater threat to New Zealand than to mainland Australia as the disease’s preferred climate (wet warm temperate to sub-tropical climates) shifts southward.
            Although the CLIMEX model projects the patterns of climatic suitability for pitch canker, it does not indicate the potential severity of the disease. Previous studies suggest that coastal regions and areas with high humidity or subtropical climates would have more severe outbreaks. Moreover, as the frequency of strong winds and extreme rainfall increases with climate change, such conditions would provide moist environments suitable for pitch canker.
Areas with a marginal to suitable climate would be unlikely to have outbreaks of the disease in the absence of host-associated insects, while areas that are climatically optimal for pitch canker would be likely to have disease outbreaks regardless of whether specific host insects or wounding agents were present. Therefore, if pitch canker is introduced to New Zealand, the entire North Island and the northern coastal regions of the South Island could support outbreaks under future scenarios, although the disease may not develop in the coastal Canterbury region as long as host insects and wounding agents remain absent. Even if these agents were present in climatically unsuitable areas, the disease would not develop.
New Zealand does not have the insects known to be associated with pitch canker disease in the USA. Australia's bark beetle Ips grandicollis is not currently present in Tasmania, but it could become associated with pitch canker outbreaks under climate change scenarios. Furthermore, other insects present in Australia could become vectors for the pathogen, and its spread could also be aided by marsupials, livestock, and birds. Regardless of the role of vectors, the pathogen could still establish in coastal regions that are climatically optimal for pitch canker.
Pinus radiata is the primary plantation species in Australasia and is known to be highly susceptible to pitch canker. However, large plantations of P. pinaster exist in low- to medium-rainfall temperate zones, and P. elliottii and P. caribaea are also planted in tropical and subtropical regions of Australia. Pinus caribaea is moderately susceptible to pitch canker, while both P. pinaster and P. elliottii are highly susceptible. Overall, outbreaks affecting P. caribaea would likely be less severe than outbreaks affecting P. pinaster, P. elliottii, or P. radiata.
It is expected that the number of countries with pitch canker will increase over the next decade as the disease’s range increases. This is of great concern to New Zealand and Australia, as well as other pitch canker-free countries. Therefore, continued vigilance and monitoring for this disease is of the utmost importance to prevent the establishment of pitch canker under future climate scenarios. 

Drought Effects on Damage by Forest Insects and Pathogens: a Meta-Analysis

Climate change may decrease summer precipitation and increase winter precipitation across the Northern Hemisphere, resulting in dire consequences for forest ecosystems since summer drought damages tree growth and forest ecosystem functioning. In addition, prolonged droughts may also trigger more frequent or severe outbreaks of forest insects and pathogen epidemics, and these events could interact with carbon starvation or hydraulic failure to further increase rates of tree mortality. Moreover, there is considerable variation in the magnitude and direction of responses to water stress by pathogens and insects. Therefore, to draw general conclusions about tree drought-damage, insect-damage, and pathogen-damage relationships, Jactel et al. (2012) conducted a meta-analysis of published primary studies that addressed the impact of water stress on forest pest and pathogen damage. More specifically, the authors estimated the overall effect of water stress on insect pest and fungal damage in forest trees and investigated the variation of response to water stress among functional groups of pests and pathogens. They also explored the relationship between the magnitude of pest or pathogen damage and the severity of drought. Jactel et al. found that primary damaging agents living in wood caused lower damage to water-stressed trees, while primary pests and pathogens living on foliage caused more damage to water-stressed trees, in all cases irrespective of stress severity. Damage by secondary agents increased with stress severity. Overall, insect and fungus feeding behavior, affected tree part, and water stress severity were the three main predictors of forest damage in drought conditions.—Megan Smith
Jactel, H., Petit, J., Desprez-Loustau, M.L., Delzon, S., Piou, D., Battisti, A., Koricheva, J. 2012. Drought Effects on Damage by Forest Insects and Pathogens: a Meta-Analysis. Global Change Biology 18: 267–276. DOI: 10.1111/j.1365-2486.2011.02512.x

Numerous textbooks have described the variable responses of forest pests to tree water stress, most of which is related to insect feeding guild. Generally, bark beetles and woodborers perform better under severe drought scenarios, while sapsuckers also benefit from water-stressed trees under moderate drought conditions. The effect of drought on leaf miners, leaf chewers, and gall makers is uncertain. Pathogenicity may also be enhanced or reduced with increased drought. Furthermore, the duration and severity of water stress influences insects’ and pathogens’ responses to drought. One scientific study found that infections are more likely to develop during or after prolonged drought stress.
To draw general conclusions about the diverse drought-damage relationships between pathogens, insects, and trees, the authors conducted a meta-analysis of published primary studies that investigated the impact of water stress on forest pest or pathogen damage. Meta-analysis is a set of statistical tools that combines the outcomes of independent studies to evaluate the overall effect of a particular factor. It also tests the influence of covariates on this effect.
For their meta-analysis, the authors collected published studies that compared pest or disease damage on water-stressed vs. control trees. Studies were included in the analysis if they met specific criteria. One criterion stated that the study must have assessed tree damage caused by an insect or fungal pathogen. Damage variables included measures that quantified impact on tree survival or tree growth by recording the amount of damaged or consumed tree tissues, the number of attacks per tree, or the percentage of infested or killed trees. Studies were also included if they reported any insect and fungal species that affected tree tissues or organs. The second criterion stated that the mean response variable (tree damage), a measure of the variance, and the sample size for both control and drought treatments must have been reported to be included in the meta-analysis. The third criterion affirmed that water conditions in the control and stressed group of trees must have been quantified using predawn leaf water potential with a pressure chamber. This ensured that two groups of trees were under different water supply conditions and that the methodology of water stress assessment was consistent across studies. Predawn leaf water potential values were used as indicators of water stress severity. Finally, the fourth criterion stated that the reported paired comparison between water-stressed and unstressed (control) trees must have been made under the same environmental conditions (besides water supply), on the same date and in the same area.
The effect of water stress on forest insect and disease damage was estimated by calculating Hedges' d as a measure of the effect size. A positive d value indicated higher damage on water-stressed trees than on control trees. For each comparison between water-stressed and unstressed trees, the authors selected the one variable that had the largest sample size or allowed the highest number of paired comparisons. Only data from the first year of each study and from the first application of water treatments were used. If results were reported for 2 years but from two different, independent tree samples, the data for each year were used as two separate comparisons.
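For readers unfamiliar with the metric, Hedges' d is a standardized mean difference with a small-sample correction; a minimal sketch of one comparison, assuming the usual pooled-standard-deviation formula (the example numbers are hypothetical):

    import math

    def hedges_d(mean_stress, sd_stress, n_stress, mean_control, sd_control, n_control):
        """Hedges' d: the difference in mean damage between water-stressed and
        control trees, divided by the pooled standard deviation and multiplied
        by the small-sample correction J. Positive values indicate higher
        damage on water-stressed trees."""
        df = n_stress + n_control - 2
        pooled_sd = math.sqrt(((n_stress - 1) * sd_stress ** 2 +
                               (n_control - 1) * sd_control ** 2) / df)
        j = 1 - 3 / (4 * df - 1)
        return (mean_stress - mean_control) / pooled_sd * j

    # Hypothetical comparison: percentage of attacked trees in stressed vs. control groups.
    print(hedges_d(42.0, 12.0, 20, 30.0, 10.0, 20))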
Jactel et al. quantified water stress severity with four variables. The first two were calculated with the information provided in the retained papers as the difference or ratio between the mean predawn leaf water potential in water-stressed and control trees. The higher the absolute value of predawn leaf water potential, the more water-stressed the tree was. The other two variables represented the hydraulic failure in the trees. These variables were calculated as the difference or ratio between the mean predawn leaf water potential in water-stressed trees and the xylem (the vascular tissue in plants that conducts water and nutrients upward from the root) pressure inducing 50% loss in hydraulic conductance (P50) due to cavitation in the same tree species. P50, as a representation of cavitation resistance, is highly variable between species and correlated with plant drought tolerance (lethal water stress).
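A sketch of how these four severity variables could be derived for a single comparison (the example water potentials and P50 value are hypothetical):

    def stress_severity(psi_stressed, psi_control, p50):
        """Four water-stress severity variables (water potentials in MPa): the
        difference and ratio of predawn leaf water potential between stressed
        and control trees, and the difference and ratio between the stressed-tree
        water potential and the species-specific P50 (the xylem pressure causing
        a 50% loss of hydraulic conductance)."""
        return {
            "psi_difference": psi_stressed - psi_control,
            "psi_ratio": psi_stressed / psi_control,
            "p50_difference": psi_stressed - p50,
            "p50_ratio": psi_stressed / p50,
        }

    # Hypothetical values: predawn water potentials of -1.8 MPa (stressed) and
    # -0.5 MPa (control) for a species with P50 = -4.0 MPa.
    print(stress_severity(-1.8, -0.5, -4.0))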
The authors split the dataset into subsets of different functional groups of insects or fungi depending on their feeding substrate, and separated insects or fungal species colonizing foliar organs involved in photosynthetic processes (leaves, needles) vs. those living in woody organs responsible for tree structure (bark, wood, roots). Then, they distinguished between insect and fungal species that develop on healthy trees (primary agents) and those that utilize trees in poor physiological conditions (secondary agents). Jactel et al. assumed that there would be four functional groups, but they were unable to find examples of secondary agents that damage foliar organs. Therefore, the study only included three functional groups. Species were also classified based on their trophic guild: chewing, boring, sucking and galling insects, leaf pathogens, root and bark rot, blue-stain fungi, and endophytes.
Studies were categorized as observational or experimental depending on whether the drought was caused by natural conditions or controlled water supply. Additionally, the authors distinguished between comparisons made in the field and those made in protected conditions in absence of natural enemies.
Effect sizes across all comparisons were combined using a random effects model to yield the grand mean effect size (d++). The effect was considered significant if the bootstrap confidence interval, calculated with 9999 iterations, did not include zero. The mean effect size (d+) and 95% bias-corrected bootstrap confidence interval were calculated for each functional group of forest insects or fungi, defined by the combination of the affected tree organ and the physiological status of the host. A mixed effects model was used to test the between-class heterogeneity and the significance of the class effect, with a P value of 0.001 used as the threshold for statistical significance. A mixed model was also used to test the relationship between the difference in damage on stressed vs. control trees (effect size) and the severity of water stress (a continuous variable).
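A simplified sketch of the weighted grand mean and bootstrap confidence interval; note that this stand-in omits the between-study variance component that a true random-effects model would estimate (the effect sizes and weights below are invented):

    import numpy as np

    rng = np.random.default_rng(5)

    def bootstrap_grand_mean(effect_sizes, weights, n_boot=9999):
        """Weighted grand mean effect size with a percentile bootstrap confidence
        interval. This is a simplified stand-in for the random-effects model:
        true random-effects weights also include a between-study variance term."""
        d = np.asarray(effect_sizes, dtype=float)
        w = np.asarray(weights, dtype=float)
        grand_mean = np.average(d, weights=w)
        boot = np.empty(n_boot)
        for i in range(n_boot):
            idx = rng.integers(0, len(d), len(d))
            boot[i] = np.average(d[idx], weights=w[idx])
        lo, hi = np.percentile(boot, [2.5, 97.5])
        return grand_mean, (lo, hi)  # "significant" if the interval excludes 0

    # Hypothetical Hedges' d values and inverse-variance weights for six comparisons.
    d_values = [0.5, -0.1, 0.3, 0.8, -0.2, 0.4]
    weights = [10, 8, 12, 6, 9, 11]
    print(bootstrap_grand_mean(d_values, weights))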
Finally, the authors calculated a fail-safe sample size, an estimate of the number of non-significant, unpublished, or missing studies that would need to be added to the analysis to make the overall test of an effect statistically non-significant. Based on this test, the authors concluded that their results were unlikely to be affected by publication bias.
Jactel et al. derived 100 comparisons of forest pest and disease damage on water-stressed vs. unstressed trees from 40 publications and reports. They involved 27 insect and 14 fungus species. A total of 26 tree or shrub species were studied. Overall, water stress resulted in higher forest pest and disease damage. The grand mean effect size equaled 0.23 and was significantly different from zero. However, an effect size of 0.2 is considered a small effect. Additionally, 40% of the individual effects were also negative, indicating lower damage in water-stressed trees than un-stressed trees. A graph displaying Hedges’ d effect size of 100 individual studies was constructed. Negative effect sizes indicated that drought resulted in lower damage.
The type of trophic substrate used by forest pest and pathogens had a highly significant effect on the difference in damage between water-stressed and unstressed trees. Primary damaging agents living on foliar organs caused higher damage in water-stressed trees than un-stressed trees, irrespective of stress severity. Drought did not exacerbate damage caused by primary agents that developed on woody organs. However, it did increase damage caused by secondary agents that developed on woody organs. A table displaying the mean Hedges’ effect size per functional group of forest pest and pathogen was constructed.
The effects of trophic guild were never significant within each functional group of forest pests and pathogens. Damage caused by sucking and boring insects and root and bark rot fungus species developing in woody organs in healthy trees (primary agents) was not worsened by drought. Drought resulted in slightly higher damage caused by leaf pathogens living on foliar organs in healthy trees and galling and chewing insects. Additionally, endophytic fungi damage increased with drought, but the results were not statistically significantly different from zero for boring insects and blue-stain fungi. Contrary to previous studies, these results suggest that the effect of water stress on the level of damage by forest pests and pathogens depends more on the type of substrate they use rather than on their feeding guild. A table displaying the effects of drought on mean effect size (damage) by different types of forest pest and pathogens was constructed.
After testing the effect of the type of water stress application on level of damage for each functional group of forest pests and pathogens separately, the authors found that there was no significant difference in mean effect size between observational and experimental studies. There also was no difference between studies made in the field or in protected conditions.
Finally, Jactel et al. tested the effect of water stress severity on the level of damage for each functional group of forest pests and pathogens separately. Water stress severity did not affect the level of damage in water-stressed trees for any primary damaging agent. However, it did affect the level of damage caused by secondary agents living in woody organs. The variable that best explained damage variation was the ratio between the observed predawn leaf water potential in the stressed trees and the species-specific index of drought tolerance (P50). Damage was consistently higher in stressed trees whose predawn leaf water potential exceeded 30% of P50; below 30%, water-stressed trees were more likely to have less damage than unstressed trees. Secondary fungi and insect species living in woody organs affected the water-stressed and unstressed trees similarly (there was no significant difference between the two groups). A graph displaying the relationship between the level of damage (effect size) caused by secondary forest pests and pathogens living on woody organs and water stress severity was constructed.
Previous studies have found that insect performance response to water stress depends on their feeding traits. Borers are known to perform better on stressed plants while gall makers and leaf chewers are negatively affected. However, one study found that sapsuckers benefited from drought while another study’s results contradicted these findings. Furthermore, drought is thought to negatively affect forest pathogens because fungi require high humidity conditions for spore dispersal, germination, and infection. Overall, the combination of effects on the performance of the biotic agent and effects on tree response could explain discrepancies in the results between different studies.
Drought can affect the nutritional quality of host trees for herbivorous insects and fungal pathogens through changes in water, carbohydrate, and nitrogen contents. Water supply strongly influences photosynthesis and therefore the provision of carbohydrates for insects and parasitic fungi. As a result of drought, the reduced concentration of carbohydrates in conifer bark tissues may limit the development of bark beetles and blue-stain fungi. Furthermore, reduced water content and protein hydrolysis lead to higher nitrogen concentrations in tree organs during drought. Since nitrogen is a limiting nutrient for many insects, an increase in plant nitrogen during water stress could improve the performance of phytophagous insects. For example, defoliator performance is higher in moderately water-stressed trees because of the higher concentration of soluble nitrogen in foliage. Sap-feeding insects also benefit from an increase in nitrogen.
Moreover, amino acids can occur in increased concentrations in water-stressed trees, stimulating the growth of bark canker fungi. Under moderate stress, however, the concentrations of carbohydrates and nitrogen decrease in the stem. This would limit the performance, and thus the damage, of primary pests living on woody organs, whereas the performance of, and damage by, primary pests living on foliar organs (which benefit from higher nitrogen content) would increase.
Water stress also affects host metabolism involved in resistance to pest and pathogen damage. Tannins used in tree resistance occur in higher concentrations in the foliage of water-stressed trees, deterring leaf chewers such as beetles and lepidopterans. In contrast, some resistance mechanisms may be less effective in water-stressed trees.
A lower water supply affects sap flow and oleoresin production and pressure, which results in lower resistance to the primary attacks of many bark beetles. Infection by the pathogenic blue-stain fungi that scolytids carry leads to the development of necrotic lesions containing concentrations of terpenoid and phenolic chemicals that are toxic to insects and fungi. However, water-stressed trees lack the carbohydrate reserves to fuel the secondary metabolism involved in these resistance processes. Therefore, severely water-stressed trees are likely to suffer more damage from secondary pests and pathogens such as wood-boring insects and blue-stain fungi.
Conversely, moderate water stress could lead to increased resistance. Because a tree’s carbohydrate pool still increases under moderate water stress, the tree may instead allocate its carbohydrates to the synthesis of defensive secondary chemicals rather than to growth and development. Secondary pests living in woody organs, like bark beetles, would then cause less damage in moderately water-stressed trees.
Additionally, decreased water content in severely stressed trees could lead to tougher foliage, resulting in lower herbivory by chewing insects.
Overall, the authors’ results confirm that drought does not systematically result in higher biotic damage. The two factors that best explained the tree damage response to drought were the type of feeding substrate used by forest insects and pathogens and the severity of water stress.

Forward–Looking Forest Restoration Under Climate Change—Are U.S. Nurseries Ready?

Scientists have predicted that climate change-induced variations in precipitation and temperature will result in range shifts for tree species across the United States. Therefore, many land managers are beginning to incorporate climate change predictions when planning restoration plantings by increasing the proportion of plant species that will be favored under climate change. As such, forward-looking restoration will involve selecting varieties of species adapted to warmer conditions and variable rainfall levels. State nurseries often supply the species for restoration activities undertaken by state natural resources agencies and land trusts that require hundreds or thousands of saplings of one or more species. Tepe and Meretsky (2011) investigated whether state and large-scale commercial nurseries currently offer species that would support forward-looking restoration. The authors collected information about state and commercial nurseries in the 48 contiguous states and subdivided the nurseries into regional units. The results indicate that of 30 state nurseries, only 20% claimed they considered climate change when planning which tree species to carry. However, 26 nurseries (87%) stated that they carried species with ranges that extended south of their state. At least 21% of nurseries that did not plan for climate change mentioned that they would like to, while approximately 13% stated that they had introduced or researched new species tolerant of new climate conditions. Barriers to forward-looking restoration included customer demand, policies and laws regarding seed zones, and uncertainty regarding the future climate. Ultimately, researchers, policy makers, nursery managers, land managers, and nursery clients must discuss how predicted climate change will affect forests, how nurseries can work within the legal framework to plan for future conditions, and how clients can be encouraged to plant tree species that are tolerant of predicted future climatic conditions.—Megan Smith.
Tepe, T.L., Meretsky, V.J., 2011. Forward–Looking Forest Restoration Under Climate Change—Are U.S. Nurseries Ready? Restoration Ecology 19: 295–298. DOI: 10.1111/j.1526-100X.2010.00748.x

Forest restoration involving tree planting takes place after the conservation purchase of farm and pastureland. Additionally, it follows forest harvest or disturbances such as forest fire or damage from ice storms or tornadoes. Reforestation during early successional stages allows land managers to shift species composition of the resulting forest in a way that could accommodate climate change. However, the process of forward-looking restoration requires a source of trees that may not be native to the restoration site or to the state.
To assess if state nurseries offer species that would support forward-looking restoration, the authors obtained information concerning state and commercial nurseries in the 48 contiguous states from November 2008 to May 2009. Tepe and Meretsky determined which states had state nurseries using Internet searches. They then contacted each nursery or state forestry agency and spoke to at least one state nursery in every state with state nurseries (n = 31). Nurseries were asked whether they currently sold species whose ranges extended south of the state boundaries (species able to tolerate warmer conditions than those found in the state), and whether climate change was considered when stocking the nursery. The nurseries were then subdivided into region units. A map displaying the division of states used in regional analysis was constructed.
Additionally, the authors contacted commercial nurseries within each region based on recommendations from the state nursery staff and forestry practitioners. These commercial nurseries (n = 8) are of adequate size to provide stock for large-scale plantings. The authors also asked about the availability of species with ranges south of the nursery location, and about climate change considerations regarding the choice of nursery stock.
Tepe and Meretsky found that of the 48 contiguous states, 30 had one or more state nurseries. Southeastern states were more likely than southwestern states to have them. Of these 30 state nurseries, 6 (20%) stated that they considered climate change when planning which tree species to carry. However, climate change considerations were rare in every region, and only one of the eight commercial nurseries indicated that it considered climate change in planning. In contrast, approximately 87% of nurseries claimed they carried species with ranges that extended south of their state.
            Five nurseries out of the 24 nurseries that did not plan for climate change mentioned that they were thinking about doing so or would like to do so. Eight nurseries not planning for climate change were aware of academic discussions regarding climate change affecting forests. Two nurseries said climate change was not among the issues they would consider in the near future. Interestingly, four state nurseries (13%) stated they considered disease-tolerant species and another four stated they introduced or researched introducing new species and individuals tolerant to new climate conditions. Eight nurseries mentioned climate regions, seed zones, and accompanying regulations as barriers to planning for climate change. Other nurseries claimed that planning for customer demand was a priority over other planning considerations. Finally, two nurseries mentioned the possibility of a carbon market increasing demand for trees and changing the species that would be in demand. A histogram displaying the ratios of states in region, states with nurseries, and nurseries incorporating climate change was constructed.
 These results demonstrate that most state nurseries are not actively planning for climate change. Most have not addressed the implications climate change may have for their industry or for their ability to provide trees to restore forests under climate change scenarios. Western states have the largest proportion of state nurseries planning for climate change, which may reflect the climate change impacts already affecting forest health in this region. Yet, not all nurseries in the west are planning for climate change.
Although many nurseries lack active planning with regard to climate change, many carry more than 10 tree species whose ranges extend south of state boundaries. This reflects the practice among nurseries of obtaining tree stock from as far as 240 kilometers south of their planting areas.
Overall, nurseries mentioned three major obstacles to adapting to climate change. The first involved uncertainty regarding what the state’s future climate would be. The second took into account the existence of current laws or policies constraining planting decisions, which in part influenced the third obstacle: client demand for traditional seed stock.
Policies related to seed zones were identified as a constraint to using or supplying seed stock other than local genetic and climatic variants. Seed zones are regions that are uniform in climate and soils. They were first established by western states to maximize the success in planting and to minimize the dilution of adapted genotypes. If enforced, only seeds originating in a particular zone can be used for plantings in that zone. Yet, climate change is predicted to establish new conditions and ecosystems, and seed zones are not currently responsive to climate change. Forward-looking restoration that uses a mix of genetic material to produce resilient flora may have better success than restoration limited by present seed zones. Ultimately, changing existing laws and policies will require efforts involving researchers, policy makers, nursery managers, land managers, and other nursery clients. These groups should assess how predicted climate conditions will affect forests, how nurseries can work within the current or revised legal frameworks and plant for future climatic conditions, and how clients can be encouraged to plant climate-change tolerant tree species.
The most promising forward-looking restoration strategy involves planting with a diverse species mix so that each species is favored by some, but not all, likely future climatic conditions. Despite the obstacles described above, some nurseries have been coordinating with geneticists and other researchers to experiment with genotypes that may be more resilient to temperature and moisture extremes. 

Evaluating Cumulative Effects of Logging and Potential Climate Change on Dry Season Flow in a Coast Redwood Forest

Silvicultural techniques such as clear-cutting and selective logging are known to enhance runoff over the short term. However, recent studies indicate that post-logging flow levels may later drop below pre-logging levels as the forest re-establishes. These changes in flow can affect downstream water supply and modify habitat conditions for stream biota. As climate change alters rainfall regimes, scientists have become concerned that it may strengthen the effects of silviculturally induced flow changes. Using pretreatment calibrations between summer flows and antecedent precipitation indices (APIs), Reid and Lewis (2011) modeled the effects of six altered rainfall regimes on dry-season flows when combined with flow changes caused by selective and clear-cut logging within watersheds at Caspar Creek. The authors found that summer flows increased sooner after selective logging than after clear-cutting, yet clear-cutting produced a larger increase in dry-season flow per unit of forest removed. Later, the selectively logged watershed experienced depressed flows, while summer flow in the partially clear-cut watershed remained higher than expected after logging. Under a modeled 22% reduction in annual rainfall, the low-flow (10th percentile) September 1 discharge decreased by 23% under unlogged conditions and fell to 41% of unlogged levels 21 years after selective logging. Under the current climate regime, such low flows would be expected twice as often during the post-logging period. Additionally, the results indicate that a shift in seasonal rainfall distribution may increase or decrease dry-season flows when combined with logging effects, even if annual rainfall remains constant. Overall, the authors’ findings suggest that forest managers should employ watershed-scale silvicultural strategies to reduce the risk of adverse dry-season flow changes when these are combined with the effects of varying rainfall.—Megan Smith
Reid, L.M., Lewis, J., 2011. Evaluating Cumulative Effects of Logging and Potential Climate Change on Dry-Season Flow in a Coast Redwood Forest. <http://www.fs.fed.us/psw/publications/reid/psw_2011_reid001.pdf>.

Water flow has been measured since 1962 at gaging weirs in the North and South Forks of Caspar Creek using a sequence of float gages (recorded on strip charts) and data-logged pressure transducers. Between 1971 and 1973, the 424-ha South Fork watershed underwent 67% volume-selection logging. During 1985 and 1986, 13% of the 473-ha North Fork watershed was clear-cut, and in 1989–92 an additional 37% of this watershed was clear-cut. Marine sandstones and siltstones underlie both watersheds, and most slopes are covered by 0.5–1.5 m of clay-loam to loam soils. Annual rainfall averages 1,170 mm, and half runs off as stream flow. Approximately 95% of the rain falls between October and May, and this period also accounts for 95% of runoff. Minimum flow occurs in early October, but most of the smaller streams are dry by June. As of 1960, the watersheds supported 60–100-year-old second-growth stands dominated by coast redwood and Douglas-fir. A map displaying the Caspar Creek Experimental Watersheds was constructed.
Calibrations established between North and South Fork flows for the pre-logging period were used to estimate expected South Fork flows after logging, and the observed deviations from expected values characterized the initial South Fork dry-season flow response to selective logging. However, because logging began in the North Fork in 1985, before South Fork flows had returned to pretreatment levels, the North Fork lacked a paired control watershed, and treatment effects from the new North Fork experiment could not be evaluated at the weir gage by the same approach. Instead, subwatersheds were used as controls: gaging flumes were installed in these subwatersheds in 1984, and the area was logged soon after. However, the subwatersheds were not gaged during the summer months because the sites run dry, so the dry-season flow analysis depended on gaging records from the North and South Fork weirs.
The authors derived a method to estimate the expected dry-season flows at the weirs after the 1985 logging by using rainfall levels to predict pretreatment flows. An antecedent precipitation index (API) was used to predict these pretreatment flows. Reid and Lewis collected rain data from the Fort Bragg gage for summer rain records and from the South Fork gage for winter records. The records were combined to construct a continuous rainfall record from 1962 through 2008. Then, a suite of APIs with recession coefficients ranging from 0.993 to 0.600 was calculated from the rainfall record.
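The summary does not give the exact formulation of the index, but a standard recursive API, in which each day’s value equals the previous value decayed by the recession coefficient plus that day’s rain, matches the description. The minimal Python sketch below illustrates that form; the example rainfall values are hypothetical.

import numpy as np

def antecedent_precipitation_index(daily_rain_mm, k):
    # Recursive antecedent precipitation index: API_t = k * API_(t-1) + P_t,
    # where k is the recession coefficient (the study tested values from 0.600 to 0.993).
    rain = np.asarray(daily_rain_mm, dtype=float)
    api = np.empty_like(rain)
    running = 0.0
    for t, p in enumerate(rain):
        running = k * running + p
        api[t] = running
    return api

# Hypothetical short record; 0.985 and 0.977 are the coefficients retained by the
# calibrations described below.
rain_record = [0.0, 12.5, 0.0, 3.2, 0.0, 0.0, 8.1]
api_sf = antecedent_precipitation_index(rain_record, 0.985)
api_nf = antecedent_precipitation_index(rain_record, 0.977)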
Because late-summer flow data were unavailable for years when the weir ponds were drained, the authors selected three to five dates in August and September (for years with dry-season data) that had no rainfall during the previous 3 days, had greater than 9 mm of rain during the previous 30 days, and were more than 6 days apart. Mean daily flows on the selected dates during the pre-logging periods were regressed against the suite of APIs to identify the API that best predicted observed flows at each site. These calibrations gave the expected August and September flows under pretreatment (unlogged) conditions at the South Fork weir (L_es, in L km⁻² s⁻¹) and the North Fork weir (L_en): L_es = 0.0143 × API_0.985 − 0.0320 and L_en = 0.0272 × API_0.977 + 0.0366, where API_0.985 and API_0.977 denote the indices calculated with recession coefficients of 0.985 and 0.977, respectively. The flow changes after the logging treatments were then calculated as ratios of observed flows to those expected for forested conditions. Graphs displaying the calibration relations between late-summer flows and APIs in the North Fork and South Fork were constructed.
To evaluate interactions between logging-related flow changes and those arising from potential climate change, the authors constructed six plausible rainfall regimes by modifying the existing rainfall record to reflect altered annual rainfall and changes in the seasonal rainfall distribution. Scenarios were selected within the observed range of variability so that the API model could describe them. Indirect interactions between altered rainfall and other changing climatic attributes were not considered.
The 24 wettest years on record averaged 22% more rainfall than the 48-year mean, so the authors constructed two 48-year records by multiplying the recorded daily rainfalls by 1.22 and by 0.78, respectively. Rainfall in April and May accounted for an average of 10.4 percent of annual rain over the 48-year record and 14.9 percent during the 24 years of that record with the highest spring percentages. Two further records were therefore constructed by multiplying April–May daily rainfalls by 14.9/10.4 and by 5.8/10.4 while multiplying rainfalls in the other months by 85.1/89.6 and 94.2/89.6, respectively.
June and July account for about 1% of the annual rainfall at Caspar Creek, and years with a below-median proportion of summer rain receive, on average, 77% less June–July rain than the long-term mean. Two additional records were therefore constructed reflecting a 77% increase and a 77% decrease in June and July rainfall, with annual rainfall left unchanged.
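As a rough illustration of how such scenarios can be built from a daily record, the Python sketch below rescales a synthetic rainfall series. The factors shown are those quoted above, but the synthetic data, function names, and use of pandas are illustrative assumptions rather than the authors’ procedure.

import numpy as np
import pandas as pd

# Hypothetical daily rainfall record (mm) indexed by date; the real record spans 1962-2008.
dates = pd.date_range("2000-01-01", "2000-12-31", freq="D")
rain = pd.Series(np.random.default_rng(0).gamma(0.3, 10.0, len(dates)), index=dates)

def scale_annual(series, factor):
    # Scenarios 1-2: uniform change in annual rainfall, e.g. multiply every day by 1.22 or 0.78.
    return series * factor

def shift_season(series, months, month_factor, other_factor):
    # Scenarios 3-6: rescale rainfall in the listed months (e.g. April-May by 14.9/10.4)
    # and compensate in the remaining months (e.g. by 85.1/89.6) so the annual total is
    # roughly preserved.
    in_months = series.index.month.isin(months)
    out = series.copy()
    out[in_months] *= month_factor
    out[~in_months] *= other_factor
    return out

wetter = scale_annual(rain, 1.22)
wet_spring = shift_season(rain, [4, 5], 14.9 / 10.4, 85.1 / 89.6)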
For each rainfall sequence, rainfall in August was set to zero and values of API_0.985 and API_0.977 were calculated for September 1 of each year. The two calibration equations were used to estimate the expected flow under unlogged conditions at each weir on that date for each of the six API sequences. The proportional logging-related changes in flow, which had been estimated as a function of time since clear-cutting or selective logging, were then applied to the climatically altered flows predicted for unlogged conditions to estimate the combined effects of logging and hypothetical changes in rainfall.
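A minimal Python sketch of that final step, assuming the September 1 API values have already been computed, might look like the following. The API value and the 0.54 logging ratio (the post-logging flow fraction quoted later for 21 years after selection logging) are used only as placeholders.

def expected_unlogged_flows(api_sf_0985, api_nf_0977):
    # Pretreatment calibrations quoted in the text (flows in L per km^2 per second):
    #   South Fork: L_es = 0.0143 * API_0.985 - 0.0320
    #   North Fork: L_en = 0.0272 * API_0.977 + 0.0366
    l_es = 0.0143 * api_sf_0985 - 0.0320
    l_en = 0.0272 * api_nf_0977 + 0.0366
    return l_es, l_en

def combined_logging_climate_flow(expected_flow, logging_ratio):
    # The study expressed logging effects as the ratio of observed to expected flow as a
    # function of years since treatment; applying that ratio to the climate-altered expected
    # flow gives the combined estimate. The ratio used below is a placeholder.
    return expected_flow * logging_ratio

l_es, l_en = expected_unlogged_flows(30.0, 30.0)          # made-up September 1 API values
september_flow_estimate = combined_logging_climate_flow(l_es, 0.54)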
Reid and Lewis found that late-summer flows increased after selective logging in the South Fork and were still elevated eight years later. By fifteen years after selective logging, flows had dropped below levels expected for unlogged conditions, and they continued to decline until 1992, twenty-one years after logging. Although flows have increased since then, they have remained slightly lower than pre-logging levels.
Flows within the North Fork watershed took longer to respond after clear-cutting. After 11 years, the proportional increase in flows reached a level similar to the maximum at the South Fork. The maximum mean increase at 11 years is equivalent to a 1.57% increase per percent of forest logged for the 50% clear-cut North Fork, about 1.2 times the maximum 1.33% increase per percent of forest removed by 67% selection logging at the South Fork. North Fork flow levels dropped back to pretreatment levels 19 years after clear-cutting. A graph displaying flow changes after South Fork selective logging and North Fork clear-cutting was constructed.
The difference between the flow responses to selective logging and clear-cutting can be attributed to the distribution of the trees that remain after logging. In second-growth redwood forests, clusters of trees share a common root system. When neighboring trees are selectively logged, the remaining trees have the root system in place to take advantage of soil moisture no longer used by the logged trees. Dry-season flow therefore dropped quickly as the remaining trees used up the excess moisture, and it continued to drop as newly established young trees grew larger. On a clear-cut slope, by contrast, the nearest trees are off site, so a large amount of regrowth must occur on site before the excess soil moisture can be fully used. Thus, dry-season flow remained elevated longer than in the selectively logged watershed.
The authors’ findings also reveal that the 22% change in annual rainfall produced the largest effect on flow. A 22% reduction in rainfall would lead to a 23% decrease in the 10th percentile September 1 flow under unlogged conditions. Twenty-one years after selective logging, the 10th percentile flow under the reduced rainfall declined to 41% of unlogged levels, compared with 54% under the present rainfall regime. For unlogged conditions under the current climatic regime, a September 1 flow of less than 0.55 L km⁻² s⁻¹ could occur once in five years, yet flows that low would be expected twice as often during the 36-year post-logging period. The decrease in average flow over the post-logging period would be similar to that expected from a 10% decrease in mean annual rainfall under unlogged conditions.
Recent rains affect APIs more strongly than earlier ones, so a shift in seasonal rainfall distribution could influence dry-season flows even if annual rainfall remained constant. A 44% reduction of spring rainfall (with corresponding increases in other months) would reduce the 10th percentile flow 21 years after selection logging to 47% of forested levels. Additionally, a 2.5% increase in annual rainfall, if it fell entirely in May, would change September 1 flows by about 10%; similar results were obtained when rainfall was increased by 46% in March and by 83% in January. Overall, rainfall occurring after February had more influence on dry-season flow than rainfall occurring earlier in the wet season.
A 77% reduction of June and July rainfall (with no change in annual rainfall) produced about half the flow reduction caused by the 44% decrease in April–May rain. However, the results also suggest that summer rainfall may have a smaller effect on dry-season flows than the API models predict: after a seasonal soil moisture deficit accumulates, a higher proportion of rainfall may be stored in the soil and transpired before it contributes to runoff.
The same changes in rainfall regime in combination with clear-cut logging could be assessed only for the initial 19-year period of flow increase at the North Fork. Seasonal redistribution of rainfall affected the post-logging response even without a change in annual rainfall, and the influence of rainfall late in the wet season was more pronounced for the North Fork than for the South Fork. Graphs displaying the modeled responses of September 1 weir flows to logging and hypothetical rainfall changes were constructed.
Overall, these results indicate that forest managers should design watershed-scale silvicultural strategies that could reduce the risk of unfavorable dry-season flow alterations. 

Effects of Multiple Interacting Disturbances and Salvage Logging on Forest Carbon Stocks

Forests contain approximately 80% of aboveground terrestrial carbon. Therefore, even minor alterations to carbon stocks or cycling in forest ecosystems may exacerbate global warming by increasing atmospheric carbon dioxide levels. Current climate change is expected to increase the severity and frequency of stand-replacing disturbances such as wildfire and windthrow, which can decrease ecosystem carbon stocks over large areas for several decades. Previous studies have assessed the effects of individual disturbances on forest carbon storage, but the consequences of multiple, interacting disturbances are relatively unknown. Using field data and statistical analyses, Bradford et al. (2012) quantified the changes in ecosystem carbon pools (live biomass, snags, down woody debris, forest floor, and total ecosystem) that resulted from blowdown from windstorms, post-disturbance salvage logging, and wildfire in a forest in northern Minnesota. Each disturbance was analyzed individually and in combination with the others. The authors found that total live carbon and carbon in live trees were highest in the control plots (no disturbance) and lower in all other treatments. Carbon in understory biomass was highest in the blowdown-salvage-fire (BSF) treatment. Carbon in snags and dead woody material was highest in the fire treatment, while carbon in down woody debris and the forest floor was highest in the blowdown treatment. Total ecosystem carbon was highest in the control treatment, intermediate in the blowdown-only and fire-only treatments, and lowest in the blowdown-fire (BF) and BSF plots. Blowdown and fire individually resulted in roughly equal decreases in live carbon and total ecosystem carbon. Fire after blowdown further decreased carbon in the forest floor and down woody debris, resulting in additional total ecosystem carbon losses, and salvage logging plus fire after blowdown showed similar results. Overall, these results indicate that increasing disturbance frequencies may challenge land-management efforts to sustain and enhance ecosystem carbon stocks.—Megan Smith

Bradford, J.B., Fraver, S., Milo, A.M., D’Amato, A.W., Palik, B., Shinneman, D.J., 2012. Effects of Multiple Interacting Disturbances and Salvage Logging on Forest Carbon Stocks. Forest Ecology and Management 267: 209–214.

The study was conducted in northeastern Minnesota, along the southern edge of the North American boreal forest ecotone and within the Superior National Forest. On July 4, 1999, a large derecho (a widespread, straight-line windstorm) affected over 200,000 hectares of the Superior National Forest. After the storm, the US Forest Service began salvage logging operations to reduce fuel loads and mitigate wildfire. Salvage logging occurred between the fall of 1999 and the fall of 2002. Despite these efforts, a section of the region was burned by the Ham Lake Wildfire in 2007. Patchy, overlapping disturbance patterns resulted in five “treatments”: undisturbed control, blowdown only (B), fire only (F), blowdown followed by fire (BF), and blowdown followed by salvage logging followed by fire (BSF). Unburned salvaged areas were unidentifiable. The authors examined mature jack pine (Pinus banksiana) communities. A map displaying the location of the 1999 blowdown, the 2007 wildfire, and the study site was constructed.

Bradford et al. established six study sites in each treatment, creating 30 sites in total. Eight of these sites contained data from an earlier investigation and were included in the authors’ study; the remaining 22 sites were randomly selected. Although the authors lacked pre-disturbance data for most of their sites, they compared reconstructed pre-disturbance stand structures (based on deadwood pools) for those sites with the stand structures of the plots that had pre-existing data, confirming that all sites were comparable in pre-disturbance stand structure and successional stage. Randomly selected sites were then ground-truthed for adherence to the expected forest type and disturbance combination. Within each site, 6–10 circular plots (200 m²) were established along a 40 × 40 m grid originating from a randomly chosen starting point, creating 239 plots in total.

Within each circular plot, standing live and dead trees (diameter greater than 10 cm at breast height) and saplings (stems greater than 2.5 cm and less than 10 cm in diameter at breast height) were recorded by species and diameter. Stems of shrubs and tree seedlings (smaller than the sapling class) were tallied by species within a 10-m² circular plot centered within each 200-m² plot. Additional seedling data were collected in 10-m² plots located equidistant between each pair of 200-m² plots, resulting in 14–20 seedling plots at each site. Downed woody debris (DWD) on each plot was inventoried: diameters of all DWD pieces greater than 7.6 cm were recorded along a 32-m transect that passed through the middle of each 200-m² plot. The authors also collected samples of herbaceous vascular plant material, forest floor material, and soil at a set location within each 200-m² plot.

The biomass of living and intact standing dead trees was calculated for all woody stems greater than 2.5 cm in diameter at breast height using regionally derived, species-specific allometric equations. The biomass of broken standing dead trees was estimated using taper functions to determine large- and small-end diameters, after which a conic-paraboloid formula was applied to determine the volume of the intact portion of the tree. Volume was converted to biomass using species-specific density values for decay class I taken from a published study; for unidentifiable standing dead trees, the authors used the average decay class II density of all species present. Shrub and tree seedling biomass was calculated using species-specific allometric equations, with stems, roots, branches, and (if alive) foliage included. Downed woody debris biomass was calculated by applying Van Wagner’s line-intersect formula to the planar intercept data and adjusting for species- and decay-class-specific densities. The volume of coarse woody debris in decay classes IV and V was adjusted for collapse using collapse ratios of 0.82 and 0.42. Carbon content was calculated from total biomass using species-specific values, and carbon was assumed to compose 50% of biomass for shrub species and unidentified down woody debris.
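Van Wagner’s line-intersect estimator gives volume per unit area as pi squared times the sum of squared piece diameters divided by eight times the transect length. The Python sketch below applies that standard formula and then a simplified density-and-carbon conversion; the wood density, collapse ratio, and carbon fraction arguments are placeholders for the species- and decay-class-specific values the authors used.

import math

def dwd_volume_m3_per_ha(diameters_cm, transect_length_m):
    # Van Wagner's line-intersect formula: V = pi^2 * sum(d_i^2) / (8 * L).
    # With piece diameters in cm and transect length in m, V comes out in m^3 per ha.
    return math.pi ** 2 * sum(d ** 2 for d in diameters_cm) / (8.0 * transect_length_m)

def dwd_carbon_mg_per_ha(diameters_cm, transect_length_m,
                         wood_density_mg_m3=0.4, collapse_ratio=1.0, carbon_fraction=0.5):
    # Illustrative conversion: volume x collapse ratio x wood density x carbon fraction.
    # The study used species- and decay-class-specific densities and collapse ratios of
    # 0.82 and 0.42 for decay classes IV and V; the default values here are placeholders.
    volume = dwd_volume_m3_per_ha(diameters_cm, transect_length_m) * collapse_ratio
    return volume * wood_density_mg_m3 * carbon_fraction

# Example: pieces of 9, 12, and 20 cm intersected along the 32-m transect used in the study.
print(dwd_carbon_mg_per_ha([9.0, 12.0, 20.0], 32.0))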

Separate mixed-model analyses of variance (ANOVA) were used to assess differences in carbon pool sizes among treatments, with disturbance combination treated as a fixed effect and site as a random effect. Analyses were carried out for carbon stored in live trees, understory (herbaceous plants, shrubs, and seedlings), live biomass (live trees plus understory), down woody debris (DWD), snags, dead woody material (snags plus DWD), forest floor, soil, and total ecosystem carbon (live biomass plus deadwood, forest floor, and soil). When significant disturbance effects were detected, post hoc Tukey’s honest significant difference tests were used for pairwise comparisons between disturbance types. All data were checked for normality and transformed before statistical analyses were undertaken.
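For readers who want to reproduce this kind of analysis, the Python sketch below fits a comparable mixed model and Tukey comparisons on synthetic data. The statsmodels library, the column names, and the made-up carbon values are assumptions for illustration, not the authors’ software or data.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Synthetic stand-in for the plot-level data: 5 treatments x 6 sites x 8 plots per site.
rng = np.random.default_rng(1)
rows = []
for i, trt in enumerate(["control", "B", "F", "BF", "BSF"]):
    for site in range(6):
        for plot in range(8):
            rows.append({"treatment": trt,
                         "site": f"{trt}_{site}",
                         "total_C": rng.normal(170 - 30 * i, 15)})
df = pd.DataFrame(rows)

# Mixed model: disturbance combination as a fixed effect, site as a random effect
# (one such model would be fitted per carbon pool).
fit = smf.mixedlm("total_C ~ treatment", data=df, groups=df["site"]).fit()
print(fit.summary())

# Post hoc pairwise comparisons among treatments (Tukey's honest significant difference).
print(pairwise_tukeyhsd(df["total_C"], df["treatment"]))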

Bradford et al. found that carbon in live biomass was stored primarily in live trees. Total live carbon and carbon in live trees were highest in the control plots (104 Mg C ha⁻¹) and lower in all other treatments (from 1 Mg C ha⁻¹ in the BSF plots to 21 Mg C ha⁻¹ in the blowdown treatment). Carbon in understory biomass was higher in the BSF treatment (1.3 Mg C ha⁻¹) than in the control, blowdown, or fire treatments (0.2 to 0.6 Mg C ha⁻¹). Carbon in snags was much higher in the fire treatment (53 Mg C ha⁻¹) than in the other treatments, and higher in the BF treatment (19 Mg C ha⁻¹) than in the BSF treatment (1.7 Mg C ha⁻¹).

Carbon in DWD was highest in the blowdown stands (35 Mg C ha⁻¹), intermediate in the BF treatment (21 Mg C ha⁻¹), and lowest in all other treatments (9–10 Mg C ha⁻¹). Carbon in all dead woody material (snags and DWD) was highest in the fire treatment (62 Mg C ha⁻¹) and lowest in the BSF treatment (11 Mg C ha⁻¹). Carbon stored in the forest floor was highest in the blowdown treatment (35 Mg C ha⁻¹) and lowest in the BSF treatment (9 Mg C ha⁻¹). Furthermore, soil carbon means were not significantly different among treatments. Total ecosystem carbon was highest in the control treatment (177 Mg C ha⁻¹), intermediate in the blowdown and fire treatments (122 and 106 Mg C ha⁻¹), and lowest in the BF and BSF treatments (66 and 40 Mg C ha⁻¹).

Differences in carbon pools suggest that the individual natural disturbances (blowdown and fire) resulted in similar decreases in live carbon (83 and 91 Mg C ha⁻¹) and total ecosystem carbon (55 and 71 Mg C ha⁻¹). Blowdown increased DWD carbon by 25 Mg C ha⁻¹ and forest floor carbon by 16 Mg C ha⁻¹ but had no significant effect on snag carbon, whereas fire increased snag carbon by 40 Mg C ha⁻¹. Compared with blowdown alone, fire following blowdown decreased DWD and forest floor carbon by a further 14 and 22 Mg C ha⁻¹ and resulted in additional total ecosystem carbon losses of 56 Mg C ha⁻¹. Similarly, salvage logging and fire after blowdown decreased carbon in DWD and forest floor by a further 26 and 39 Mg C ha⁻¹ and caused additional ecosystem carbon losses of 83 Mg C ha⁻¹ compared with blowdown alone. A figure displaying carbon stocks for individual carbon pools and total ecosystem carbon for forest stands in each treatment was constructed, as was a figure showing the consequences of windthrow, salvage logging, and wildfire on ecosystem carbon stocks.

These findings indicate that all combinations of disturbances resulted in lower total ecosystem carbon than the undisturbed control. Additionally, the results demonstrate that individual disturbances (wind and fire) resulted in similar shifts of carbon from live biomass to detrital pools, as well as similar losses of total ecosystem carbon. However, blowdown shifted live tree carbon into downed woody debris and forest floor pools while fire shifted this carbon into snags, which have slower decomposition rates than downed wood or forest floor. Total ecosystem carbon loss was attributed to the amount of carbon immediately lost from trees.

Overall, the authors’ results demonstrate that secondary major disturbances can cause substantial additional decreases in ecosystem carbon, though the magnitude may vary depending on the time between successive disturbances and disturbance types. Furthermore, the original blowdown disturbance increased carbon stored in DWD and forest floor, providing larger surface fuel loads that increased burn severity and carbon loss. Although forest floor and DWD carbon pools were elevated after the blowdown, the fire released all the carbon added to the litter in the forest floor and more than half of the carbon added to DWD. Salvage logging also modestly enhanced carbon losses. Post-blowdown salvage logging prior to fire decreased carbon in DWD to control levels, removing all of the carbon added to DWD in the blowdown. It also resulted in less carbon stored in snags. Yet, salvage logging did not significantly affect total ecosystem carbon despite its effects on individual carbon pools.

In conclusion, forest managers must address the issue that increasing disturbance frequencies may counteract efforts to sustain and enhance ecosystem carbon stocks, thereby exacerbating climate change by releasing more carbon into the atmosphere.