Most previous approaches to semantic mapping in robots have worked bottom-up: objects or structures are identified in the raw sensor data, and the corresponding labels are added to the geometry map. We propose to view the task differently: rather than building a geometry map annotated with tags of known classes, we instantiate a knowledge base by providing sensor data and spatial information for instances of the object and aggregate categories it contains. We call the resulting combination of knowledge base and map an anchored knowledge base. In contrast to previous semantic mapping approaches, context-dependent top-down information can be generated from the knowledge base, allowing the robot to form expectations about objects yet to be sensed; these expectations can, in turn, help focus attention within the sensor data, disambiguate noisy readings, and fill in occlusions. In the talk, we will present first results on the generation of anchored knowledge bases.
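To make the idea concrete, the following is a minimal illustrative sketch (not the authors' implementation; all class and category names are hypothetical) of how a knowledge base anchored to map coordinates could generate top-down expectations from an already-anchored instance:

```python
from dataclasses import dataclass, field

@dataclass
class Anchor:
    """Links a knowledge-base instance to spatial data from the map."""
    instance_id: str
    category: str
    position: tuple  # (x, y) in map coordinates, hypothetical convention

@dataclass
class AnchoredKB:
    """Sketch of a knowledge base whose instances carry spatial anchors.

    `aggregates` encodes, for each aggregate category, the part
    categories it is expected to contain (assumed example knowledge).
    """
    aggregates: dict = field(default_factory=dict)
    anchors: list = field(default_factory=list)

    def anchor_instance(self, instance_id: str, category: str, position: tuple):
        # Instantiate a category with sensor-derived spatial information.
        self.anchors.append(Anchor(instance_id, category, position))

    def expectations(self, instance_id: str) -> list:
        """Top-down step: given one anchored instance, return categories
        of objects the robot should expect to sense nearby, which can
        focus attention and help disambiguate noisy detections."""
        anchor = next(a for a in self.anchors if a.instance_id == instance_id)
        return self.aggregates.get(anchor.category, [])

# Hypothetical usage: a "dining_area" aggregate anchored in the map
kb = AnchoredKB(aggregates={"dining_area": ["table", "chair"]})
kb.anchor_instance("area1", "dining_area", (2.0, 3.5))
print(kb.expectations("area1"))  # → ['table', 'chair']
```

The point of the sketch is the direction of information flow: the anchored instance in the knowledge base drives predictions about the map, rather than map labels being derived purely bottom-up from sensor data.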