Many statistical forecast systems are available to interested users. To be useful for decision making, these systems must be based on evidence of underlying mechanisms. Once causal connections between a mechanism and its statistical manifestation have been firmly established, the forecasts must also provide quantitative evidence of "quality." However, the quality of statistical climate forecast systems (forecast quality) is an ill-defined and frequently misunderstood property. Providers and users of such forecast systems are often unclear about what quality entails and how to measure it, leading to confusion and misinformation. A generic framework is presented that quantifies aspects of forecast quality using an inferential approach to calculate nominal significance levels (p values), which can be obtained either by directly applying nonparametric statistical tests such as the Kruskal-Wallis (KW) or Kolmogorov-Smirnov (KS) tests or by using Monte Carlo methods (in the case of forecast skill scores). Once converted to p values, these forecast quality measures provide a means to objectively evaluate and compare temporal and spatial patterns of forecast quality across datasets and forecast systems. The analysis demonstrates the importance of reporting p values rather than adopting arbitrarily chosen significance levels such as 0.05 or 0.01, which is still common practice. This is illustrated by applying nonparametric tests (KW and KS) and skill-scoring methods [linear error in probability space (LEPS) and the ranked probability skill score (RPSS)] to the five-phase Southern Oscillation index classification system using historical rainfall data from Australia, South Africa, and India. These quality measures were selected solely because they are in common use; their selection does not constitute an endorsement. It is found that nonparametric statistical tests can be adequate proxies for skill measures such as LEPS or RPSS.
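The direct-test route described above can be sketched as follows. This is a minimal illustration, not the paper's actual procedure: the rainfall samples are synthetic, and only three of the five SOI phases are simulated. The idea is simply that grouping observations by forecast category and applying KW (all groups) or KS (one group against the pooled climatology) yields p values directly.

```python
import numpy as np
from scipy.stats import kruskal, ks_2samp

rng = np.random.default_rng(42)

# Hypothetical monthly rainfall (mm) for three SOI-phase categories,
# with the distribution shifting slightly across phases.
phase_rain = [rng.gamma(shape=2.0, scale=30.0 + 10.0 * k, size=40) for k in range(3)]

# Kruskal-Wallis: are the phase-conditioned rainfall distributions distinguishable?
kw_stat, kw_p = kruskal(*phase_rain)

# Kolmogorov-Smirnov: compare one phase against the pooled "climatology".
pooled = np.concatenate(phase_rain)
ks_stat, ks_p = ks_2samp(phase_rain[0], pooled)

print(f"KW p = {kw_p:.4f}, KS p = {ks_p:.4f}")
```

Reporting the p values themselves, rather than a binary significant/not-significant verdict at 0.05, lets the same numbers be mapped and compared across locations and forecast systems.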
The framework can be implemented anywhere, regardless of dataset, forecast system, or quality measure. Ultimately, such inferential evidence should be complemented by descriptive statistical methods to fully support operational risk management.
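For skill scores such as LEPS or RPSS, the abstract notes that p values come from Monte Carlo methods instead. A permutation sketch of that idea is shown below, assuming synthetic data and a deliberately simple stand-in skill score (reduction in mean absolute error from conditioning on phase) rather than LEPS or RPSS themselves: shuffle the phase labels many times, recompute the score under each shuffle, and take the p value as the fraction of shuffled scores at least as large as the observed one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical rainfall with a weak signal tied to five phase labels (0..4).
n = 200
phases = rng.integers(0, 5, size=n)
rain = rng.gamma(2.0, 30.0, size=n) + 5.0 * phases

def skill(rain, phases):
    """Stand-in skill score: fractional MAE reduction from phase-conditional medians."""
    clim_err = np.mean(np.abs(rain - np.median(rain)))
    cond_err = np.mean([np.mean(np.abs(rain[phases == p] - np.median(rain[phases == p])))
                        for p in np.unique(phases)])
    return 1.0 - cond_err / clim_err  # > 0 means conditioning on phase helps

obs = skill(rain, phases)

# Null distribution: skill when phase labels carry no information.
null = np.array([skill(rain, rng.permutation(phases)) for _ in range(999)])
p_value = (1 + np.sum(null >= obs)) / (1 + len(null))
print(f"skill = {obs:.3f}, Monte Carlo p = {p_value:.4f}")
```

The same permutation machinery applies unchanged if `skill` is replaced by a proper LEPS or RPSS implementation, which is what makes the framework independent of the particular quality measure chosen.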