During my BSc in Economics I occasionally read papers that discussed historical topics, such as this one on the adoption of new machines, this one on the effect of religiosity on development, or this one on the origin of gender roles.
I was wondering what role papers like these, which use econometric methods to study history and look for causal effects, play in the discussion of history. Would any historian accept their claims, such as the idea that some regions in France lagged behind because of religious school curricula, and use them in a discussion?
And more broadly, what is the role in historical research of input from other fields that use different methods (such as econometric methods in this case)?
So I can only speak to my own subfield, which is late imperial China. There are now a lot of these econometric papers floating around, using various datasets to try to prove things. A typical question is something like "is Confucianism a significant contributor to economic growth?" Without naming names, I would say that the vast majority of these works, both published papers and manuscripts I've seen, suffer from three issues.
1. The historiography they cite tends to be old, and often out of date, irrelevant, or misleading. Economists aren't reading the latest history journals; instead they use decades-old theses as the starting point for their inquiry. Often that starting point is just plain wrong, or already disproven. They also frequently miss the nuances needed to interpret the data. Say they use a national database of floods in historical records, then cite a paper on Yellow River water control and base their econometric model partly on the flood-management institution it describes... except that institution may have existed only in that area, or only for a few decades, while the model is applied to a database that might span centuries and the whole country. That's misleading at best.

2. The data sucks (because history), or doesn't measure what the author thinks it measures, because they don't understand how it was collected or produced. Historical data is often spotty, so the datasets built from it carry all kinds of inherent biases, which social scientists then use without paying attention to these problems. When a historian reads work based on these obviously flawed sets, we have to ask questions about the validity of the results. Sometimes the data is created by the social scientists themselves by combining things they found in creative ways. One measure I've seen, claimed to estimate Confucian fervor, was distance from the place where Zhu Xi, the founder of Neo-Confucianism, taught. That's like saying you can measure Christian fervor by distance from Jerusalem or Rome. Let that sink in for a bit.

3. The studies are obviously just shopping for instrumental variables. I've seen a series of studies from the same people using mostly similar methodology, but in one paper they use instrument X and in the next instrument Y, always with some justification; at the end of the day, you can easily poke holes in those justifications (often because of points 1 or 2 above). And if you ask them why they change their instrument willy-nilly, as a colleague of mine once did, they don't have an answer. Meanwhile, econ journals don't usually ask historians to review these articles, so none of this gets raised in peer review.
For a real-life example I saw recently, see this Twitter thread:
https://mobile.twitter.com/zhangtaisu/status/1463565082894475266
A valid question, easily spotted by any historian. The study also basically replicates what Kenneth Pomeranz did decades earlier in his book The Making of a Hinterland (not cited in this paper, even though it covers the exact same region; see point 1 above). In other words, they're proving a thesis already made, just with a bit more data. No response from the author so far that I can see. I suspect they don't actually have an answer and Zhang Taisu was just being charitable.
So in short, at least in my field: no, these studies are ignored by historians, and when we do read them (and we do) it's often pretty cringeworthy, because they contain a lot of obvious problems. Once in a while you get people who are more careful and do a good job, but in my experience that's quite rare.