Causal inference holds great promise for understanding real-world systems, but its success hinges on the assumptions we make. In this talk, I explore how differentiable methods can serve as flexible tools for causal discovery, and I present a practical application to climate data that uses causal representation learning. A second key idea is the introduction of typing assumptions: the idea that variables belong to semantic types, and that causal relations are constrained by these types. This perspective provides a structured way to encode domain knowledge and improves the identifiability of causal models in applied settings.
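
To make the typing idea concrete, here is a minimal, hypothetical sketch of how type constraints could restrict the search space of a differentiable discovery method. The variable names, types, and the specific rule (edges only between certain type pairs) are illustrative assumptions, not the talk's actual formulation.

```python
import numpy as np

# Illustrative assumption: three variables grouped into two semantic types,
# with a type-level rule saying which edge directions are permitted.
var_types = {"x1": "A", "x2": "A", "x3": "B"}
allowed = {("A", "A"), ("A", "B")}  # edges may go A -> A and A -> B only

variables = list(var_types)
n = len(variables)

# Binary mask over the adjacency matrix: mask[i, j] = 1 iff an edge
# variables[i] -> variables[j] is consistent with the typing rule.
mask = np.zeros((n, n), dtype=int)
for i, vi in enumerate(variables):
    for j, vj in enumerate(variables):
        if i != j and (var_types[vi], var_types[vj]) in allowed:
            mask[i, j] = 1

# A differentiable discovery method could multiply its learned edge
# scores elementwise by this mask, ruling out type-inconsistent edges
# before optimization even begins.
print(mask)
```

Because the mask zeroes out entire blocks of candidate edges, a handful of type labels can shrink the space of admissible graphs dramatically, which is one intuition for why typing can improve identifiability.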