FEA Inter-operability
- JMB
- Topic Author
- Visitor
18 years 5 months ago #409
by JMB
FEA Inter-operability was created by JMB
I was talking to Jonas Forssell (a co-developer of the FEA code IMPACT, see impact.sourceforge.net/) and we got to discussing my goal of enabling the various public domain FEA codes to interchange at least nodal, elemental, constraint & load data with each other. I have been dabbling with Code-Aster, Calculix, Elmer, Impact, Gmsh, etc., and each one uses an input file that is just a bit different from the others. My goal is simple: to enable a higher level of trust in public domain FEA codes, one needs to be able to validate results across diverse codes. Being able to exchange at least the 4 fundamental pieces of information (nodes, elements, constraints & loads) will go a long way toward facilitating that.
If we go one step further and write FEA code that can read some major formats, say NASTRAN, ABAQUS or I-DEAS (UNV), we would gain a significant ability to test public domain FEA codes against the published reference problems available for those commercial (or other significant) FEA codes. Both of us thought it would be a good idea to start a discussion on the subject.
Thank you, Admin, for hosting it here! Would you kindly post what you think is appropriate from your offline message to me? You had some very good points which I had overlooked, and they would be very valuable here.
Thanks
-JMB
- Joël Cugnoni
- Offline
- Moderator
18 years 5 months ago #411
by Joël Cugnoni
Joël Cugnoni - a.k.a admin
www.caelinux.com
Replied by Joël Cugnoni on topic Re:FEA Inter-operability
Hi,
Your post just confirms my own thoughts on the subject: Open Source NEEDS interoperability, with "open" file formats or at least good converters.
I think that by now most of the open source FE codes are more or less mature, but as you said, each of them has its own file format plus some partial support for "external" data files.
I am very happy that you thought of CAELinux.com as a good place for discussion, and I really hope that this subject will be addressed by a large audience. I have just created this forum to support this effort.
Concerning the question of file formats, I have already spent some time figuring out what could be done, and here is a short "summary":
A. Use an existing format
Select one or a set of existing file formats and try to write "as-complete-as-needed" (to be defined, see *1!!) file translators:
- UNV file format: in my opinion one of the best candidates, as it is well defined, widespread, and can contain FE models, FE results and also experimental data. But one has to clearly specify which datasets will be used (mandatory and optional datasets), and specify how the data will be encoded, especially for output results: data location (node, element nodes, integration point, centroid), precision (single, double), real or complex. Another issue is how the node / element groups will be defined. (A small parsing sketch is given after this list.)
- MED file format: a well-structured, FE-oriented binary file format, already supported by Salome & Code-Aster; an open format with libraries & examples available; it already accounts for the specifics of FE models and can store several models & results (like UNV). BUT it is not widely supported and does not offer a way to exchange data with commercial codes.
- GMSH file format: in my opinion a very simple data file format that provides just enough information and is very simple to implement in other codes. There is just the problem of how to define the element type; nothing is supported for that now. Good point: it can store either models or datasets.
- NASTRAN: to be honest, I don't know much about this format.
- ABAQUS: can be good for transferring meshes (with very simple group definitions, for example), but the format has slightly changed with Abaqus v6 (concepts of parts & assemblies). I had written some converters based on this format, but it can be very painful to make something robust because the format is loosely structured (you can define the keywords in random order, with a variable number of options, etc.). FIL or ODB files for results: not very practical for output.
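To make the "as-complete-as-needed translator" idea a bit more concrete, here is a minimal sketch of reading the UNV node dataset (2411) in Python. It relies on my own assumptions about the usual layout (the "-1" dataset delimiters and two records per node) and ignores everything else; a real translator would also need the element dataset (2412) and the group datasets.
[code]
def read_unv_nodes(path):
    """Return {node_label: (x, y, z)} from the 2411 dataset of a UNV file."""
    with open(path) as f:
        lines = [line.rstrip("\n") for line in f]
    nodes, i = {}, 0
    while i < len(lines):
        if lines[i].strip() == "-1":              # dataset delimiter
            dataset = lines[i + 1].strip()
            j = i + 2
            while lines[j].strip() != "-1":       # find the closing delimiter
                j += 1
            if dataset == "2411":                 # nodes: two records per node
                for k in range(i + 2, j, 2):
                    label = int(lines[k].split()[0])
                    xyz = lines[k + 1].replace("D", "E").split()   # Fortran 'D' exponents
                    nodes[label] = tuple(float(v) for v in xyz[:3])
            i = j + 1                             # jump past the closing delimiter
        else:
            i += 1
    return nodes
[/code]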
B. Define a new open standard file format
For example, based on XML/DOM, with the main key data that anybody needs to have transferred (see *1), and create modules to translate to/from other formats.
For example, we could define a mesh in this way (XML; a parser library can be used to directly get a data tree structure in the software):
[code:1]
1,1.000,2.000,3.000 .....
1,2,3,4,5,6,26,7,8....
11,12,13,14,15,16,116,117,118....
1,12,543,234,212,2543,2234,12314,1231,3452,34563
......
1,24,654,23,645,14,45,124,457
......
[/code:1]
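To illustrate the idea (every tag and attribute name below is purely hypothetical, not a proposed standard), such a file could be read directly into a data tree with a standard XML parser, for example in Python:
[code]
import xml.etree.ElementTree as ET

# Hypothetical XML mesh layout: node coordinates, element connectivities, node/element sets.
MESH_XML = """<mesh>
  <nodes>
    <node id="1" x="1.000" y="2.000" z="3.000"/>
  </nodes>
  <elements type="HEX8">
    <element id="1" connectivity="1 2 3 4 5 6 26 7"/>
  </elements>
  <nodeset name="fixed">1 12 543 234</nodeset>
  <elementset name="loaded">1 24 654 23</elementset>
</mesh>"""

root = ET.fromstring(MESH_XML)                   # parse straight into a data tree
nodes = {int(n.get("id")): (float(n.get("x")), float(n.get("y")), float(n.get("z")))
         for n in root.iter("node")}
elements = {int(e.get("id")): [int(v) for v in e.get("connectivity").split()]
            for e in root.iter("element")}
print(nodes)
print(elements)
[/code]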
Final thoughts:
In my opinion, the best way could be to have decent support for the UNV and/or GMSH formats in all open source FE codes, plus a set of efficient tools to convert to/from UNV, GMSH and other formats. For example, UNV can be used to transfer data to/from commercial software, and GMSH can provide a good link between open source codes. Then we could develop modules to translate Abaqus model files to GMSH, or MED files to/from GMSH, for example. Or we could just settle on ONE single format, UNV for example, plus some converters.
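As a hedged sketch of what such a converter module could look like (assuming the GMSH version-2 ASCII format and common UNV 2411 conventions; the details would need checking against both specifications), here is a nodes-only GMSH-to-UNV translation in Python:
[code]
def gmsh_nodes_to_unv(msh_path, unv_path):
    """Copy the $Nodes block of a GMSH v2 ASCII mesh into a UNV dataset 2411 (nodes only)."""
    with open(msh_path) as f:
        lines = [line.strip() for line in f]
    start = lines.index("$Nodes")
    count = int(lines[start + 1])                       # number of nodes announced by GMSH
    with open(unv_path, "w") as out:
        out.write("    -1\n  2411\n")
        for line in lines[start + 2 : start + 2 + count]:
            nid, x, y, z = line.split()[:4]             # "id x y z" records
            # record 1: label, export coord. system, displacement coord. system, color
            out.write(f"{int(nid):10d}{1:10d}{1:10d}{11:10d}\n")
            # record 2: coordinates
            out.write(f"{float(x):25.16E}{float(y):25.16E}{float(z):25.16E}\n")
        out.write("    -1\n")
[/code]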
But in any case, the KEY issue is to provide AS-COMPLETE-AS-NECESSARY support for the selected format, not very partial and incomplete support. For example, Netgen or GMSH can save meshes in a rather large number of formats, but in most cases the element and node sets are not saved in those formats!! In such a case, you will never be able to transfer a model easily, and that is really sad because, with a few more lines of code, it could have been done in a very simple way.
Joël Cugnoni
www.caelinux.com
(*1): key data that needs to be transferred (in my opinion; a small data-model sketch follows this list):
FE Model
Mandatory:
nodal coordinates, node sets, element types and connectivities, element sets
Optionally:
loads / constraints / material properties & assignments / sections (very difficult to standardize; node/element sets can be used instead of this option).
Results:
Mandatory:
we need the mesh (nodes, groups, connectivities) as well as the datasets (at least real single-precision values at nodes)
Optionally:
datasets defined on element nodes, integration points, centroids or along paths, etc.
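For what it is worth, the mandatory part of this list maps to a very small data model. A sketch in Python (the names are mine and purely illustrative, not a proposed standard):
[code]
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class ExchangeMesh:
    """Mandatory FE model data: nodes, elements, node sets, element sets."""
    nodes: Dict[int, Tuple[float, float, float]] = field(default_factory=dict)  # label -> (x, y, z)
    elements: Dict[int, Tuple[str, List[int]]] = field(default_factory=dict)    # label -> (type, connectivity)
    node_sets: Dict[str, List[int]] = field(default_factory=dict)
    element_sets: Dict[str, List[int]] = field(default_factory=dict)

@dataclass
class NodalResult:
    """Mandatory result data: one dataset of real, single-precision values at nodes."""
    name: str                                                                    # e.g. "displacement"
    values: Dict[int, Tuple[float, ...]] = field(default_factory=dict)           # node label -> components
[/code]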
- vugie
- Visitor
18 years 5 months ago #418
by vugie
Replied by vugie on topic Re:FEA Inter-operability
Just after I read your posts, I accidentally found this project:
femml.sourceforge.net/
It is an attempt to develop a kind of universal format for FEA software (with an interesting approach to material property exchange).
I deal with FEM data exchange problems quite often, with commercial as well as open source software. Most of my problems are with results data. In commercial software, the results data format is often not documented at all, which makes it impossible to run any custom analysis on such data.
I think the most important thing in such a project is to convince software manufacturers that either documenting their formats or writing translators for a universal format is profitable for them. For open software this is obvious.
This would enable valuable comparisons between various FEA packages, and would increase the level of trust in open software and the competitiveness of commercial packages.
Wojtek Golebiowski (Vugie)
- roleic
- Visitor
18 years 5 months ago #419
by roleic
Replied by roleic on topic Re:FEA Inter-operability
Hi,
Concerning a standardized data format for FEM data exchange: all text-oriented formats, like universal files, have the disadvantage of large size, because every digit takes a byte, resulting in dozens of bytes per floating point number, whereas a binary format can represent the same complete number in, e.g., 4 bytes in total.
For FEM data exchange (and even more so in CFD), where huge files must be handled, a size factor of 5 or more still matters very much. Markup-language formats like XML, XHTML, or the FemML proposed above make the size issue even worse by adding lots of tags and extra syntax.
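A rough back-of-the-envelope check of that size factor (my own example, not from the original post):
[code]
import struct

x = 0.123456789012345
as_text = f"{x:.15e}"               # typical ASCII form: about 21 characters, i.e. 21 bytes
as_single = struct.pack("<f", x)    # 4 bytes in binary single precision
as_double = struct.pack("<d", x)    # 8 bytes in binary double precision
print(len(as_text), len(as_single), len(as_double))   # 21 4 8 -> factor ~3-5, before any XML tags
[/code]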
Therefore I plead for a standard binary format like HDF (Hierarchical Data Format, from NCSA, the National Center for Supercomputing Applications).
hdf.ncsa.uiuc.edu/
en.wikipedia.org/wiki/Hierarchical_Data_Format
It is platform independent, and free and open tools for file translation, viewers, etc. are available. HDF can also hold the data description together with the actual data, so with HDF you get all the advantages of text-based formats combined with those of binary formats.
Why not try to implement a data structure like that of the universal files in the HDF format?
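A hedged sketch of that idea, using the h5py library; the group and dataset names below are my own invention and only mimic the universal-file structure, they are not an existing standard:
[code]
import numpy as np
import h5py

# One possible way to mirror a "universal-file"-like structure in HDF5.
with h5py.File("model.h5", "w") as f:
    mesh = f.create_group("mesh")
    mesh.create_dataset("nodes/labels", data=np.arange(1, 1001))
    mesh.create_dataset("nodes/coordinates", data=np.zeros((1000, 3)))      # x, y, z per node
    conn = mesh.create_dataset("elements/connectivity",
                               data=np.random.randint(1, 1001, (500, 8)))
    conn.attrs["element_type"] = "HEX8"     # self-describing: metadata stored with the data
    mesh.create_dataset("node_sets/fixed", data=np.arange(1, 51))
    res = f.create_group("results").create_group("displacement")
    res.create_dataset("values", data=np.zeros((1000, 3), dtype=np.float32))
    res.attrs["location"] = "node"
[/code]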
One more aspect seems important: since multiphysics analysis is growing in importance (see also SALOME's support for it), such a FEM data exchange format should be extensible, allowing not only classic mechanical FEM data exchange but also the exchange of CFD, thermal, electromagnetic and other kinds of data.
If such an efficient and versatile standard file format multiplies the flexibility available to academia and the scientific open source community, it might even draw the attention of the big players in the FEM industry and the standardization bodies... similar to the Open Document Format, ODF.
roleic
- Joël Cugnoni
- Offline
- Moderator
18 years 5 months ago #420
by Joël Cugnoni
Joël Cugnoni - a.k.a admin
www.caelinux.com
Replied by Joël Cugnoni on topic Re:FEA Inter-operability
Yes, you are right: for results it may become very important to have a binary data file format, and for this task HDF provides a strong base.
If we follow this idea, the most interesting practical starting point could be to implement support for MED files. The MED file format is:
- a binary format for meshes & results
- based on HDF5 (convenient for data exchange; see the short inspection sketch after this list)
- already specialized for FE analysis, with practical support for groups and for datasets at various locations (nodes, element nodes, Gauss points...)
- developed to ensure interoperability between SALOME, Code-Aster and several other simulation codes
- provided with an open source library to read from / write to MED files
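Since MED files are HDF5 containers, they can at least be inspected with any generic HDF5 tool, even without the MED library. A short sketch with h5py (it makes no assumption about the internal MED group names, it simply prints whatever is stored):
[code]
import h5py

# List every group and dataset path stored in a MED file (an HDF5 container).
with h5py.File("model.med", "r") as f:
    f.visit(print)
[/code]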
But a binary format is not always very convenient for mesh manipulation, as we sometimes need to edit the mesh in a text editor, for example to simply change the element type or define another node set, etc.
So one option could be to focus on TWO complementary formats: one ASCII format for "small" datasets (meshes, results of small models) and one binary format for efficient data exchange of large models.
In my opinion, the best candidates are the Universal (UNV) file for the ASCII format and MED for the binary one.
If someone needs it, I can provide some information on the UNV file format; the MED data format and libraries are described at www.code-aster.org/outils/med/ (in FRENCH...).
I hope this will be helpful as a starting point.
Joël Cugnoni - a.k.a admin
www.caelinux.com
- JonasForssell
- Visitor
18 years 5 months ago #434
by JonasForssell
Replied by JonasForssell on topic Re:FEA Inter-operability
Hello,
I'd be happy to support the recommended format in Impact, but documentation in French is a no-go for me and the co-authors.
Is there a translation available for the .unv format?
Impact is written in Java. Are any of the formats you suggest available as libraries in Java?
/Jonas