A $(1-\alpha)$-CI for a parameter $\theta$ based on $n$ observations $X_1, \dots, X_n$ is a set $C = C(X_1, \dots, X_n)$ such that

$$\mathbb{P}(\theta \in C) \ge 1 - \alpha.$$
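
As a concrete example (a minimal sketch, not from the source): the standard 95% z-interval for the mean of normal data with known standard deviation $\sigma$, using only the Python standard library:

```python
import math
from statistics import NormalDist

def normal_mean_ci(data, sigma, alpha=0.05):
    """(1 - alpha)-CI for the mean of i.i.d. normal data with known sigma."""
    n = len(data)
    z = NormalDist().inv_cdf(1 - alpha / 2)  # standard normal quantile
    xbar = sum(data) / n
    half = z * sigma / math.sqrt(n)
    return xbar - half, xbar + half

# hypothetical measurements with known sigma = 0.2
lo, hi = normal_mean_ci([4.9, 5.2, 5.0, 4.8, 5.1], sigma=0.2)
```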

This is a frequentist notion (frequentist statistics) as opposed to a Bayesian one. The Bayesian counterpart of the CI is the credible interval.

The interpretation of CIs is notoriously tricky. The correct interpretation is that if you repeat the experiment many times (where an experiment consists of drawing $n$ datapoints $X_1, \dots, X_n$), then the parameter will be inside the interval about a $(1-\alpha)$-fraction of the time. The incorrect (but common) interpretation is that the parameter has a $(1-\alpha)$ chance of being inside a given interval. This isn’t true: for each realization of $X_1, \dots, X_n$, the parameter is either covered by the interval or it’s not.
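
The frequentist reading can be checked by simulation: repeat the experiment many times and count how often the true mean lands in the computed interval. A sketch, assuming the standard 95% z-interval for a normal mean with known $\sigma$:

```python
import math
import random
from statistics import NormalDist

random.seed(0)
z = NormalDist().inv_cdf(0.975)  # 95% two-sided quantile
mu, sigma, n, trials = 0.0, 1.0, 30, 2000

covered = 0
for _ in range(trials):
    data = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = sum(data) / n
    half = z * sigma / math.sqrt(n)
    if xbar - half <= mu <= xbar + half:
        covered += 1

coverage = covered / trials  # empirical coverage, close to 0.95
```

Note that `coverage` is a long-run frequency over repeated experiments; no single interval from one experiment "has a 95% chance" of containing `mu`.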

CIs are fundamental tools in uncertainty quantification. They can be obtained directly, or by inverting hypothesis tests (see hypothesis testing and duality between hypothesis tests and CIs).
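
One way to see the duality: collect every null value $\theta_0$ that a level-$\alpha$ two-sided z-test fails to reject; up to grid resolution, that set is the standard z-interval. A sketch (function names and the grid are my own):

```python
import math
from statistics import NormalDist

def z_test_pvalue(data, theta0, sigma):
    """Two-sided z-test of H0: mean == theta0 (sigma known)."""
    n = len(data)
    z = (sum(data) / n - theta0) * math.sqrt(n) / sigma
    return 2 * (1 - NormalDist().cdf(abs(z)))

def ci_by_inversion(data, sigma, alpha=0.05):
    """Collect the theta0 values NOT rejected at level alpha (crude grid)."""
    grid = [i / 1000 for i in range(-2000, 2001)]  # theta0 in [-2, 2]
    accepted = [t for t in grid if z_test_pvalue(data, t, sigma) > alpha]
    return min(accepted), max(accepted)

data = [0.1, -0.2, 0.0, 0.3, -0.2]
lo, hi = ci_by_inversion(data, sigma=0.5)  # matches xbar +/- z * sigma / sqrt(n)
```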

They also have some problems. If you repeatedly compute confidence intervals as new data arrive, your intervals will eventually contradict one another (with probability 1). That is, there will be at least two intervals $C_i$ and $C_j$ with no overlap, $C_i \cap C_j = \emptyset$. This implies they are not sequentially valid, in the sense that they do not guarantee that the parameter $\theta$ is contained in $C_n$ for all $n$ simultaneously with probability $1-\alpha$. This issue is solved with confidence sequences.
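
A sketch of this phenomenon, assuming a stream of i.i.d. normal data with the standard 95% z-interval recomputed at every $n$. For intervals on the real line, Helly's theorem gives a cheap check: some pair is disjoint iff the whole family has no common point, i.e. iff $\max_i \text{lo}_i > \min_i \text{hi}_i$:

```python
import math
import random
from statistics import NormalDist

def has_disjoint_pair(intervals):
    """True iff some pair of 1-D intervals is disjoint.

    By Helly's theorem in one dimension, pairwise overlap implies a common
    point, so comparing max(lo) against min(hi) suffices."""
    return max(lo for lo, _ in intervals) > min(hi for _, hi in intervals)

random.seed(1)
z = NormalDist().inv_cdf(0.975)  # 95% two-sided quantile
mu, sigma = 0.0, 1.0

running, s = [], 0.0
for n in range(1, 5001):
    s += random.gauss(mu, sigma)
    xbar = s / n
    half = z * sigma / math.sqrt(n)
    running.append((xbar - half, xbar + half))

# A contradiction is guaranteed only as n -> infinity (law of the iterated
# logarithm); any finite run may or may not have hit one yet.
found = has_disjoint_pair(running)
```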

The name confidence interval is a slight misnomer, in the sense that some methods produce confidence regions rather than intervals. That is, the confidence set need not be convex, or even connected. And sometimes the parameter space is discrete (eg conformal prediction), so the resulting confidence set is certainly not an interval.