Robots operating in complex human-inhabited environments need to represent and reason about different kinds of knowledge, including ontological, spatial, causal, temporal, and resource knowledge. Often, these reasoning tasks are not mutually independent but need to be integrated with each other. Integrated reasoning is especially important when dealing with knowledge derived from perception, which may be intrinsically incomplete or ambiguous. For instance, the non-observable property that a dish has been used, and should therefore be washed, can be inferred from the observable properties that it was full before and is empty now. In this paper, we present a hybrid reasoning framework that makes it easy to integrate different kinds of reasoners. We demonstrate the suitability of our approach by integrating two reasoners, one for ontological and one for temporal reasoning, and using them to recognize temporally and ontologically defined object properties in point cloud data captured with an RGB-D camera.