This work is concerned with introducing semantics into robotic mapping at different scales. To be of practical use, a mobile robot needs a map of its environment. Until now, such maps have been mostly geometric or topological. For example, a 3D laser scanner yields a 3D point cloud of all structures in the robot's observable environment. In the future, these maps need to be enriched with semantics, to give the robot real knowledge about its environment and to enable it to communicate this knowledge to others, be they humans or robots.
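To make the notion of semantic enrichment concrete, the following minimal sketch contrasts a purely geometric point cloud, as delivered by a 3D laser scanner, with one carrying per-point semantic labels. The class names, label strings, and fields are illustrative assumptions, not part of any particular mapping system.

from dataclasses import dataclass
from typing import List

# Sketch only: a geometric map entry versus a semantically enriched one.
# Label names ("wall", ...) and the confidence field are hypothetical.

@dataclass
class Point3D:
    x: float
    y: float
    z: float

@dataclass
class SemanticPoint3D(Point3D):
    label: str          # assumed semantic class, e.g. "wall", "floor", "door"
    confidence: float   # assumed classifier confidence for the label

# Purely geometric map: coordinates only, as produced by a 3D laser scan.
geometric_map: List[Point3D] = [
    Point3D(1.2, 0.4, 2.0),
    Point3D(1.3, 0.4, 2.0),
]

# Semantic map: the same geometry plus meaning the robot can reason about
# and communicate to humans or other robots.
semantic_map: List[SemanticPoint3D] = [
    SemanticPoint3D(1.2, 0.4, 2.0, label="wall", confidence=0.93),
    SemanticPoint3D(1.3, 0.4, 2.0, label="wall", confidence=0.91),
]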