Building a Fixed Income Portfolio: Construction and Optimization Considerations

You have signals. You know which bonds look cheap, which have momentum, which issuers are high quality. Great. Now what?

Turning those signals into an actual portfolio is where things get real. Chapter 8 of Richardson’s book covers the full pipeline: from defining your investment universe, to cleaning up your signals, to running an optimizer, to making sure the final portfolio actually does what you want it to do. There are a lot of decisions along the way. And each one matters.

Start with the Sandbox

Before you rank a single bond, you need to decide which bonds are even eligible. Richardson walks through this using US high yield corporate bonds as the example. The benchmark (ICE/BAML H0A0 index) had about 2,000 bonds as of late 2020. But not all of them belong in a systematic portfolio.

First, liquidity. If a bond hasn’t traded in six months, there’s no point modeling how attractive it is. Your trading desk won’t be able to buy it. Richardson suggests prescreening your universe using data from sources like TRACE (which tracks US bond trades). Remove the illiquid stuff upfront. He argues there’s no evidence of a liquidity premium in public corporate bond markets anyway. So you’re not giving up expected returns by cutting illiquid bonds.
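A liquidity prescreen like this can be sketched in a few lines. The column names and thresholds below are illustrative assumptions, not Richardson's; in practice the inputs would come from a TRACE-derived dataset.

```python
import pandas as pd

def prescreen_liquidity(universe: pd.DataFrame,
                        max_days_since_trade: int = 180,
                        min_amount_outstanding: float = 250e6) -> pd.DataFrame:
    """Drop bonds unlikely to be tradable. Column names and cutoffs are
    hypothetical; real screens would use TRACE trade counts and volumes."""
    tradable = (
        (universe["days_since_last_trade"] <= max_days_since_trade)
        & (universe["amount_outstanding"] >= min_amount_outstanding)
    )
    return universe[tradable]

bonds = pd.DataFrame({
    "cusip": ["A", "B", "C"],
    "days_since_last_trade": [3, 200, 45],
    "amount_outstanding": [500e6, 400e6, 100e6],
})
liquid = prescreen_liquidity(bonds)  # keeps only bond "A"
```

The point is to cut before modeling: no signal is computed for a bond the desk can't trade.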

Then there’s seniority. Bonds at different levels of the capital structure have different recovery rates, which directly affects their spreads. If you keep all seniority levels, you need to adjust your signals. Otherwise your value and carry signals will be comparing apples to oranges.

Time to maturity matters too. Bonds about to mature will naturally leave the index. Buying short-duration bonds eats into your turnover budget for little benefit. And private issuers (about 29% of the US HY index) are harder to analyze since financial data is often unavailable. You don’t have to exclude them all, but you should set a minimum information threshold.

Every bond you exclude from the benchmark creates tracking error: the volatility of the return difference between your portfolio and the benchmark. And not all exclusions are equal. Dropping a bond with a high DTS (duration times spread, a proxy for risk) creates more tracking error than dropping a low-risk bond of the same weight.
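A common back-of-the-envelope way to see this: a bond's excess-return volatility is roughly its DTS times the volatility of relative spread changes, so the standalone tracking-error contribution of an exclusion scales with weight times DTS. The numbers below (including the 30% relative spread volatility) are illustrative assumptions, not figures from the chapter.

```python
def exclusion_te(weight: float, duration: float, spread_bps: float,
                 rel_spread_vol: float = 0.30) -> float:
    """Approximate standalone tracking-error contribution (in bps) of
    excluding a bond: active weight times DTS times an assumed
    volatility of relative spread changes."""
    dts = duration * spread_bps           # duration-times-spread, in bps
    return weight * dts * rel_spread_vol

# Same 0.5% benchmark weight, very different risk:
high_dts = exclusion_te(0.005, duration=6.0, spread_bps=500)  # 4.5 bps
low_dts  = exclusion_te(0.005, duration=2.0, spread_bps=150)  # 0.45 bps
```

The high-DTS exclusion contributes ten times the tracking error of the low-DTS one, despite identical weights.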

The Investment Cube

Richardson introduces a helpful visual called the investment cube. Think of it as a grid. On one axis you have the breadth of investment themes: carry, value, momentum, defensive, and potentially others like sentiment or liquidity provision. On the other axis you have depth, ranging from simple measures to complex ones.

A simple momentum signal might be trailing six-month equity returns. A complex one might include related asset returns and fundamental momentum measures. A simple value signal might regress spreads on credit ratings. A complex one might use structural models or machine learning.

This cube is also useful for distinguishing smart beta from full systematic approaches. Smart beta lives in the left portion of the cube (simple measures of known themes). A full systematic process spans the entire face and keeps expanding it through research.

Cleaning Up Your Signals

Raw signals need work before they’re useful. Richardson walks through the transformation pipeline step by step, using equity momentum for corporate bonds as an example.

Z-scoring is the first step. Take your raw momentum measure, subtract the cross-sectional mean, divide by the standard deviation. This normalizes the signal so different measures are comparable. But a simple Z-score across all bonds creates problems. Momentum tends to correlate negatively with DTS (a proxy for market beta) because better-performing companies tend to have lower spreads. It also creates sector imbalances since some sectors naturally have more positive momentum than others.

Within-sector normalization fixes the sector problem. Instead of Z-scoring across all bonds, you Z-score within each sector. Now the long and short sides of each sector are perfectly balanced. But the beta tilt remains.

Beta neutralization removes that tilt. You regress the signal onto DTS, and the residual becomes your new signal. This ensures you’re not accidentally repackaging beta (market exposure) as alpha (skill). Richardson shows that even after all these transformations, the modified signal still retains a 0.77 correlation with the original raw signal. You’re keeping the information content while removing the stuff you don’t want.
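The within-sector Z-score and beta-neutralization steps can be sketched as below. This is a minimal illustration, not Richardson's implementation; column names (`sector`, `raw_signal`, `dts`) are assumptions.

```python
import numpy as np
import pandas as pd

def zscore(x: pd.Series) -> pd.Series:
    """Cross-sectional Z-score: demean and scale to unit std."""
    return (x - x.mean()) / x.std()

def transform_signal(df: pd.DataFrame) -> pd.Series:
    """Illustrative pipeline: Z-score within each sector, then regress
    the result on DTS and keep the residual so the final signal carries
    no market-beta tilt."""
    z = df.groupby("sector")["raw_signal"].transform(zscore)
    X = np.column_stack([np.ones(len(df)), df["dts"]])  # intercept + DTS
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    return z - X @ beta  # residual: uncorrelated with DTS by construction
```

Because the regression includes an intercept, the residual signal has zero mean and zero correlation with DTS, which is exactly the "not repackaging beta as alpha" property described above.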

For handling missing data, Richardson warns against a common mistake. If your signal is leverage (lower is better) and you assign zero to bonds where leverage can’t be computed, you’re accidentally saying you love those bonds the most. That’s almost certainly not your view. Better approaches include imputing values from similar issuers.
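A simple version of that better approach: fill missing values with the sector median rather than zero. Column names here are hypothetical.

```python
import pandas as pd

def impute_by_sector(df: pd.DataFrame, col: str = "leverage") -> pd.Series:
    """Fill missing values with the issuer's sector median, rather than
    with 0 (which, for a lower-is-better signal like leverage, would
    wrongly score those bonds as the most attractive)."""
    return df[col].fillna(df.groupby("sector")[col].transform("median"))
```

A sector median is a neutral view: a bond with unknown leverage is treated as typical for its peers, not as exceptional.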

Signal Weighting: Keep It Simple

Once you have clean signals grouped into themes (carry, value, momentum, defensive), you need to decide how much weight each theme gets. Richardson’s recommendation is straightforward: start with equal weights. Only deviate if you have strong conviction based on unique data or methodology, or if certain themes have low correlation with each other (so you need to up-weight them to make sure they matter in the final blend).

He’s also skeptical of factor timing. The idea of increasing or decreasing your exposure to a theme based on its recent performance sounds appealing. But the evidence is mostly from equity markets where trading is cheap. In corporate bonds, where every trade is expensive, the modest improvement from timing probably doesn’t survive transaction costs. Plus, a diversified multi-theme approach already gets you some natural factor timing for free, since the correlations between themes shift over time.

The signal weights are expressed in risk terms. If your tracking error budget is 2% annualized (typical for high yield), you’re allocating that risk across your themes. Each theme’s portfolio gets scaled to target a specific volatility level so they’re comparable.
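Scaling a theme portfolio to a target volatility is a one-liner given a covariance matrix. A minimal sketch, assuming annualized units throughout:

```python
import numpy as np

def scale_to_target_vol(theme_weights: np.ndarray,
                        cov: np.ndarray,
                        target_vol: float = 0.02) -> np.ndarray:
    """Scale a theme's active weights so its ex-ante volatility
    (sqrt of w' * Cov * w) hits the target, e.g. 2% annualized."""
    vol = np.sqrt(theme_weights @ cov @ theme_weights)
    return theme_weights * (target_vol / vol)
```

Once each theme portfolio runs at the same ex-ante volatility, equal-weighting the themes really does mean equal risk allocation.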

The Optimizer

At the core of the process sits an objective function. In simple terms: pick the portfolio weights that maximize expected returns while satisfying a bunch of constraints.

Richardson outlines a linear program with these constraints: all weights non-negative (no shorting in a long-only mandate), each bond's weight can't deviate more than 0.25% from its benchmark weight (forced diversification), the portfolio must be fully invested, turnover is capped at 10%, and deviations from benchmark spread and duration are limited.
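The core of that program can be sketched with an off-the-shelf LP solver. This toy version keeps only the long-only band around benchmark weights and the full-investment constraint; the turnover, spread, and duration limits from the chapter would enter as additional linear rows. All data below is made up.

```python
import numpy as np
from scipy.optimize import linprog

def optimize(alpha: np.ndarray, bench_w: np.ndarray,
             band: float = 0.0025) -> np.ndarray:
    """Maximize expected alpha subject to long-only weights, full
    investment, and a +/-25bp band around benchmark weights.
    (Turnover and spread/duration constraints are omitted here.)"""
    n = len(alpha)
    bounds = [(max(0.0, b - band), b + band) for b in bench_w]
    res = linprog(-np.asarray(alpha),              # linprog minimizes
                  A_eq=np.ones((1, n)), b_eq=[1.0],
                  bounds=bounds, method="highs")
    return res.x

bench = np.array([0.3, 0.3, 0.2, 0.2])
alpha = np.array([0.5, -0.2, 1.0, 0.1])
w = optimize(alpha, bench)
# The best-scoring bonds sit at the top of their bands, the worst at the bottom.
```

Even in this toy case you can see the mechanics: the optimizer pushes the highest-alpha bonds to their upper bounds and funds that by pushing the lowest-alpha bonds to their lower bounds, until full investment binds.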

A real-world version also needs a proper risk model. For 2,000 bonds, that means estimating volatilities and correlations for the entire universe. Common factor models are the standard approach for corporate bonds. Commercial providers like BARRA, Axioma, and Northfield offer these, but building your own gives you transparency and flexibility.

One important concept is the transfer coefficient. It measures how well your final portfolio reflects your signals. In a perfect world, the bond you like most gets the biggest overweight. In reality, liquidity constraints, position limits, and risk constraints all create friction. Richardson shows an example where the correlation between optimized weights and the signal is only 0.60. That gap is the cost of operating in the real world.
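One simple version of the transfer coefficient is just the cross-sectional correlation between active weights and the signal (more refined definitions risk-adjust both sides first):

```python
import numpy as np

def transfer_coefficient(active_weights: np.ndarray,
                         signal: np.ndarray) -> float:
    """Cross-sectional correlation between active weights and the
    signal; 1.0 means the portfolio perfectly expresses the views."""
    return float(np.corrcoef(active_weights, signal)[0, 1])
```

A frictionless portfolio whose active weights are proportional to the signal scores exactly 1.0; constraints and liquidity drag it down toward figures like the 0.60 in Richardson's example.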

Rebalancing and Transaction Costs

How often should you rebalance? It depends on how fast your signals decay and how expensive trading is. Corporate bond trading costs are high relative to the asset class’s volatility. So you need to ration your turnover budget carefully.

Rebalancing gets triggered by fund flows, coupon accumulation, or drift from the optimal portfolio. Portfolio managers track the transfer coefficient over time. When it drops too far, meaning your actual portfolio has drifted from the ideal, it’s time to trade. But even then, timing matters. Don’t rebalance the day before a holiday or right before a Fed meeting.

Beta Completion

Here’s the final piece. Your systematic process is designed to generate alpha through security selection. But for a long-only benchmark-aware portfolio, you also need to deliver the benchmark’s beta, its exposure to interest rate curves, credit curves, and (for global mandates) currencies.

Richardson strongly recommends using derivatives for this. Interest rate swaps and futures handle rate risk. Credit index derivatives handle overall credit exposure. Currency forwards handle FX. Why would you distort your carefully optimized security selection portfolio just to match the benchmark’s duration or credit spread? Use cheaper instruments for that job and let your bond portfolio focus on what it does best: picking winners.

Crowding: Not a Problem Yet

Richardson addresses the common worry that systematic strategies will get crowded. His response: not in fixed income. Quantitative fixed income strategies manage about 3% of the assets that quant equity strategies do. The market is enormous and systematic approaches are still rare. Crowding could become a risk someday, but it’s not high on the list right now.

The Takeaway

Portfolio construction is where the rubber meets the road. You can have the best signals in the world, but if your universe is wrong, your signals aren’t cleaned properly, your optimizer is poorly constrained, or your transaction costs eat your alpha, none of it matters. Richardson makes clear that this is where craftsmanship separates good systematic investors from everyone else. Every decision in the pipeline, from liquidity screening to beta completion, is a choice that affects the final result.



Book: Systematic Fixed Income: An Investor’s Guide by Scott Richardson, Ph.D. Published by John Wiley & Sons, 2022. ISBN: 9781119900139.
