Solving kinematics word problems is a specialized task that is best addressed by bespoke logical reasoners. Reasoners, however, require structured input in the form of kinematics parameter values, so translating textual word problems into such structured inputs is a key step in enabling end-to-end automated word problem solving. Span detection for a kinematics parameter is the task of identifying the smallest span of text in a kinematics word problem that contains the information needed to estimate the value of that parameter. A key aspect differentiating kinematics span detection from other span detection tasks is the presence of multiple inter-related parameters, each of which requires a separate span to be identified. State-of-the-art span detection methods cannot exploit the existence of multiple inter-dependent span identification tasks. We propose a novel neural architecture that exploits the inter-relatedness between the separate span detection tasks using a single joint model. This allows us to train the same network for span detection over multiple kinematics parameters, implicitly and automatically transferring knowledge across those parameters. We show that such joint training improves accuracy on real-world datasets over state-of-the-art span detection methods.
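The abstract does not specify the architecture, so the following is a minimal sketch of the joint-training idea it describes, assuming a PyTorch-style shared encoder with one span-scoring head per kinematics parameter; all class names, hyperparameters, and the BiLSTM encoder choice are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class JointKinematicsSpanDetector(nn.Module):
    """Hypothetical joint model: a single shared encoder feeds a separate
    start/end span head per kinematics parameter (e.g. initial velocity,
    acceleration, time), so gradients from every parameter's span loss
    update the same shared representation."""

    def __init__(self, vocab_size=30000, hidden=256, num_params=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.LSTM(hidden, hidden, batch_first=True,
                               bidirectional=True)
        # One (start, end) position-scoring head per kinematics parameter.
        self.heads = nn.ModuleList(
            [nn.Linear(2 * hidden, 2) for _ in range(num_params)])

    def forward(self, token_ids):
        states, _ = self.encoder(self.embed(token_ids))
        # Per parameter: logits over token positions for span start/end,
        # each of shape (batch, seq_len, 2).
        return [head(states) for head in self.heads]

# Joint training: sum the span losses of all parameters so knowledge is
# transferred implicitly across parameters through the shared encoder.
model = JointKinematicsSpanDetector()
tokens = torch.randint(0, 30000, (4, 60))   # toy batch of token ids
gold = torch.randint(0, 60, (4, 5, 2))      # gold (start, end) per parameter
loss_fn = nn.CrossEntropyLoss()
loss = sum(loss_fn(logits[:, :, 0], gold[:, i, 0]) +
           loss_fn(logits[:, :, 1], gold[:, i, 1])
           for i, logits in enumerate(model(tokens)))
loss.backward()
```

Under this reading, the contrast with single-task span detection is that each parameter would otherwise get its own independently trained network; here the summed loss lets a span annotated for one parameter also shape the encoding used by all the others.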