A number of groups, from libraries and universities to academic projects, are striving to implement flexible data management systems that harness the latest semantic web technologies to integrate data and facilitate breakthrough interdisciplinary analysis.
It is well known that every lab and every individual research group (regardless of discipline) has developed an internal data management system that “works” (i.e., literature & data collection > Excel > stats > graphing > word processor), but what has your lab found useful, and what are your biggest frustrations?
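For concreteness, here is a minimal sketch of that typical "data collection > Excel > stats > graphing" pipeline in Python. The file name `measurements.xlsx` and its `group`/`value` columns are hypothetical, and pandas, SciPy, and matplotlib are assumed to be installed.

```python
# A minimal sketch of the "Excel > stats > graphing" pipeline described above.
# The spreadsheet name and its columns are hypothetical stand-ins.
import pandas as pd
from scipy import stats
import matplotlib.pyplot as plt

df = pd.read_excel("measurements.xlsx")          # Excel stage

# Stats stage: compare two hypothetical groups with a t-test
a = df.loc[df["group"] == "control", "value"]
b = df.loc[df["group"] == "treated", "value"]
t, p = stats.ttest_ind(a, b)
print(f"t = {t:.3f}, p = {p:.4f}")

# Graphing stage: boxplot saved for pasting into the write-up
df.boxplot(column="value", by="group")
plt.savefig("figure1.png", dpi=300)
```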
Please feel free to comment below, join the discussion on ResearchGate, or visit the NIF Blog @ http://blog.neuinfo.org/index.php/essays/lab-data-management-practices
3 thoughts on “What is your lab’s “Data Management” workflow?”
Response on ResearchGate from Tim Smith
I worked at a small company that had a system that was considerably more laborious but not considerably more useful, where all files were to be tucked away into a file folder in a deep and broad hierarchy of folders with assigned numeric prefixes. Trying to find someone else's data or understand what process they had worked through in Excel to produce an analysis tended to be an exercise in frustration. Institutional memory was "managed" by archiving Powerpoint slides from lab meetings, which were encouraged to be reasonably comprehensive and tended to include relevant parameters; they were considerably better than nothing.
My own personal Shangri-La contains a cross-platform network file system with robust support for tagging (manually or by introspection) files and directories, but maybe I'm speccing too narrowly.
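One way that kind of tagging could work is via filesystem extended attributes. The sketch below assumes a Linux filesystem with xattr support (Python's `os.setxattr`/`os.getxattr` are Linux-only), and the file name is purely illustrative.

```python
# A minimal sketch of manual file tagging via extended attributes,
# one possible building block for the tagging described above.
# Assumes Linux; the tag names and file name are hypothetical.
import os

def get_tags(path: str) -> list[str]:
    """Return the tags stored on a file, or an empty list if none are set."""
    try:
        return os.getxattr(path, "user.tags").decode().split(",")
    except OSError:
        return []

def add_tags(path: str, tags: list[str]) -> None:
    """Merge new tags into the comma-separated 'user.tags' attribute."""
    merged = sorted(set(get_tags(path)) | set(tags))
    os.setxattr(path, "user.tags", ",".join(merged).encode())

add_tags("results_2013-04-02.csv", ["electrophysiology", "rat", "pilot"])
print(get_tags("results_2013-04-02.csv"))
```

Searching would then amount to walking the tree and filtering on `user.tags`, though a real system would want an index rather than a scan.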
Integrating into the Workflow?
Data Management Technology
Essentially, data management technology refers to (generally large) databases, together with software that encapsulates business definitions of that data (a data dictionary), and specialized access architectures such as "business intelligence" tools and data warehouses.
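To make the data dictionary idea concrete, here is a minimal sketch using SQLite: the data live in one table, while a second table holds the business definition and units of each column. All names here (`experiments.db`, the tables, the columns) are hypothetical.

```python
# A minimal sketch of a database plus a "data dictionary" table that
# stores the business meaning of each column in the data table.
import sqlite3

con = sqlite3.connect("experiments.db")
con.execute("""CREATE TABLE IF NOT EXISTS recordings (
    subject_id TEXT, session_date TEXT, spike_rate REAL)""")
con.execute("""CREATE TABLE IF NOT EXISTS data_dictionary (
    table_name TEXT, column_name TEXT, definition TEXT, units TEXT)""")
con.executemany(
    "INSERT INTO data_dictionary VALUES (?, ?, ?, ?)",
    [("recordings", "subject_id", "Lab-assigned animal identifier", None),
     ("recordings", "session_date", "Date of the recording session", "ISO 8601"),
     ("recordings", "spike_rate", "Mean firing rate over the session", "Hz")])
con.commit()

# Anyone querying the data can also query what the data means:
for row in con.execute("SELECT column_name, definition, units FROM data_dictionary"):
    print(row)
```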