This paper considers the problem of lossy compression for the computation of a function of two correlated sources, both of which are observed at the encoder. Due to the presence of observation costs, the encoder may observe only subsets of the samples from the two sources, with a fraction of the sample pairs possibly overlapping. For both Gaussian and binary sources, the rate-distortion function is characterized for selected functions under quadratic and Hamming distortion metrics, respectively. Based on these results, for both examples, the optimal fraction of overlapping measurements is shown to depend on the function to be computed by the decoder, on the source correlation, and on the link rate. Special cases are discussed in which the optimal overlap fraction equals the maximum or minimum value allowed by the sampling budget, illustrating non-trivial performance trade-offs in the design of the sampling strategy.
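As background for the quadratic-distortion Gaussian case, the following sketch evaluates the classical point-to-point Gaussian rate-distortion function R(D) = max(0, ½ log₂(σ²/D)) and its inverse, the distortion-rate function D(R) = σ² 2^(−2R). This is standard single-source material only, not the paper's two-source characterization with sampling overlap; the function names are illustrative.

```python
import math

def gaussian_rate_distortion(sigma2: float, D: float) -> float:
    """Classical point-to-point Gaussian rate-distortion function
    R(D) = max(0, 0.5 * log2(sigma^2 / D)) under quadratic distortion.
    Background formula only; not the paper's multi-source result."""
    if D <= 0:
        raise ValueError("distortion D must be positive")
    return max(0.0, 0.5 * math.log2(sigma2 / D))

def gaussian_distortion_rate(sigma2: float, R: float) -> float:
    """Inverse form: distortion achievable at rate R bits per sample,
    D(R) = sigma^2 * 2^(-2R)."""
    return sigma2 * 2.0 ** (-2.0 * R)

# Example: a unit-variance source at rate 0.5 bit/sample incurs
# distortion 2^(-1) = 0.5; distortion equal to the variance needs rate 0.
print(gaussian_distortion_rate(1.0, 0.5))
print(gaussian_rate_distortion(1.0, 1.0))
```

The `max(0, ...)` clause reflects that once the allowed distortion reaches the source variance, describing the source requires no rate at all.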