Data Descriptor

Organ-On-A-Chip (OOC) Image Dataset for Machine Learning and Tissue Model Evaluation

1 Latvian Biomedical Research and Study Centre, LV-1067 Riga, Latvia
2 CellboxLabs Ltd., LV-1063 Riga, Latvia
3 Institute of Electronics and Computer Science (EDI), LV-1006 Riga, Latvia
* Author to whom correspondence should be addressed.
Submission received: 27 November 2023 / Revised: 28 January 2024 / Accepted: 29 January 2024 / Published: 1 February 2024

Abstract

Organ-on-a-chip (OOC) technology has emerged as a groundbreaking approach for emulating the physiological environment, revolutionizing biomedical research, drug development, and personalized medicine. OOC platforms offer physiologically relevant microenvironments and enable the real-time monitoring of tissue needed to develop functional tissue models. Imaging is the most common approach for the daily monitoring of tissue development, and image-based machine learning serves as a valuable tool for enhancing and monitoring OOC models in real time by classifying the images generated through microscopy, thereby contributing to the refinement of model performance. This paper presents an image dataset containing cell images generated from an OOC setup with different cell types. The dataset comprises 3072 images acquired with an automated brightfield microscopy setup. For some images, parameters such as cell type, seeding density, time after seeding, and flow rate are provided. These parameters, along with predefined criteria, can contribute to the evaluation of image quality and the identification of potential artifacts. The dataset can be used as a basis for training machine learning classifiers for the automated analysis of data generated by an OOC setup, enabling more reliable tissue models, automated decision-making within the OOC framework, and more efficient research in the future.
Dataset License: CC-BY-SA

1. Summary

Organ-on-a-chip (OOC) technology is a rapidly advancing field that merges microfluidic and in vitro cell culture techniques to model tissue. These systems are designed to control cell culture conditions and thereby mimic key aspects of human physiology. OOC technology offers the flexibility to change critical variables such as flow rate, shear stress, oxygen gradient, and drug concentration or other compound exposure in a dynamic environment [1]. Additionally, in contrast to 2D cell culture model systems, this technology allows the mimicking of cell–cell and cell–extracellular matrix interactions, vascularization, mechanical stress, and the development of different diseases [2]. Moreover, these microfluidic devices can provide a constant flow that creates shear stress, a well-known factor in functional tissue development; for instance, results suggest that in gut-on-a-chip models, flow-induced stress regulates the Wnt antagonist Dickkopf-1 and Frizzled-9 receptors, which induce villi and crypt formation in these models, thereby recapitulating the intestinal phenotype [3]. Furthermore, dynamic systems often replicate in vivo physiological conditions more closely, which can lead to more physiologically relevant results [4]. For instance, when subjected to dynamic conditions, 3D organoids derived from patients with high-grade serous ovarian cancer exhibited heightened sensitivity to paclitaxel [5]. This technology is also promising for personalized medicine and drug development, as it can be used to develop or select therapeutics tailored to individual patients, thereby identifying drug candidates with a greater probability of success and shortening the clinical trial timeline [6]. OOC technology has shown great potential in cancer research by providing a platform to replicate the human tumor microenvironment in vitro [7]. It has been used to model early stages of cancer metastasis and the intravasation of tumor cells into the vasculature, angiogenesis, and progression from early to advanced lesions involving epithelial–mesenchymal transition (EMT), tumor cell invasion, and metastasis [8].
Cell seeding density and daily monitoring play crucial roles in the development of functional models in cell culture. Cell culture conditions vary widely for each cell type, emphasizing the importance of understanding the specific requirements of different cell types and the impact of seeding density on their behavior and function [9]. Overall, both cell seeding density and daily monitoring are critical factors in the development of functional cell culture models, influencing cell behavior and differentiation [10]. Tissue can be monitored using imaging methods, and this monitoring can be automated by means of machine learning (ML), the use of which has been steadily increasing in various fields, including science and medicine [11]. Hence, ML can be used to analyze data and images generated within an OOC model. For instance, image-based machine learning can be used to monitor and improve the performance of OOC models by classifying microscopy images acquired in real time [12]. Deep neural networks (DNNs), a group of ML methods that have achieved state-of-the-art results on perceptual tasks, are widely used tools for image classification and automated analysis; their use can speed up cell image analysis and improve the research process [13]. A DNN-based monitoring system for an OOC setup can provide a faster way to analyze the data generated by OOC models and can help to develop more reliable tissue models with real-time monitoring and automated decision making. In particular, automating an OOC setup by implementing ML methods would allow the tissue-growing process to be continuously monitored without human intervention. The present dataset provides an opportunity to train ML-based classifiers, including DNNs, on a diverse set of real-world OOC microscopy images.

2. Data Description

The dataset consists of 3072 images of cells cultured in an OOC setup and a single spreadsheet with the metadata about these images. The cells in the dataset belong to the following six cell lines:
  • A549 (human lung adenocarcinoma alveolar basal epithelial cells, CCL-185, ATCC, Manassas, VA, USA);
  • Caco-2 (colorectal adenocarcinoma epithelial cells, HTB-37, ATCC);
  • HPMEC (human pulmonary microvascular endothelial cells; 3000, ScienCell, Carlsbad, CA, USA);
  • HUVEC (human umbilical vein endothelial cells, CRL-1730, ATCC);
  • NHBE (normal human bronchial epithelial cells, CC-2541, Lonza, Basel, Switzerland);
  • HSAEC (human small airway epithelial cells, PCS-301-010, ATCC).
Each cell image carries a class label, ‘good’ or ‘bad’, which reflects the assessment of sample quality by four experts in cell biology. Images containing artifacts that impair the evaluation of cells (deformation of channel walls, air bubbles, out-of-focus or blurry images) are classed as ‘bad’, as are images in which cell morphology and density do not correspond to the cell line and cultivation conditions. The remaining images are deemed to be of acceptable quality and are labeled ‘good’. In case of discrepancy between raters, the opinion of the majority was considered final, or an additional rater was consulted if needed. The distribution of cell images by cell line and ‘good’/‘bad’ class is given in Table 1, while representative images of the A549 and HPMEC cell lines with the ‘good’/‘bad’ class labels are shown in Figure 1.
Furthermore, cell images are grouped by the time that has passed since their seeding. There are four such groups, namely, 0–1 days, 2–3 days, 4 days, and >4 days. The distribution of cell images by cell lines and time after seeding is provided in Table 2.

2.1. Folder Structure

The dataset is structured to be ready for training and validating machine learning models. It is therefore split into three main subsets: train (~70% of the data), val (~10%), and test (~20%). The split into these subsets is proportionally representative of the ‘good’/‘bad’ classes, cell lines, and time groups. The subfolders within the three main folders represent these divisions in the following way (a sketch of loading the data from this layout is given after the tree):
OOC_image_dataset
- train
    - good
        - cell_type_A549
            - 0–1_days
            - 2–3_days
            - 4_days
            - 4+_days
        - cell_type_CACO
            - …
        - cell_type_HPMEC
            - …
        - cell_type_HSAEC
            - …
        - cell_type_HUVEC
            - …
        - cell_type_NHBE
            - …
    - bad
        - cell_type_A549
            - …
        - cell_type_CACO
            - …
        - cell_type_HPMEC
            - …
        - cell_type_HSAEC
            - …
        - cell_type_HUVEC
            - …
        - cell_type_NHBE
            - …
- val
    - good
        - …
    - bad
        - …
- test
    - good
        - …
    - bad
        - …
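Given this layout, the data can be loaded directly with common deep learning tooling. Below is a minimal loading sketch using Keras; it assumes the image files are in a format Keras can decode, and the path, image size, and batch size are illustrative. Keras infers the binary label from the first-level ‘bad’/‘good’ folders (0/1 in alphabetical order) and gathers the images in the nested cell-type and time-group subfolders recursively.

```python
# A minimal sketch of loading the dataset with Keras, under the
# assumptions stated above; not the authors' actual pipeline.
import tensorflow as tf

IMG_SIZE = (600, 600)  # matches the crop size used in Section 4

def load_split(split):
    # Labels are inferred from the first-level 'bad'/'good' subfolders;
    # images inside the nested subfolders are collected recursively.
    return tf.keras.utils.image_dataset_from_directory(
        f"OOC_image_dataset/{split}",
        labels="inferred",
        label_mode="binary",   # 'bad' -> 0, 'good' -> 1 (alphabetical order)
        image_size=IMG_SIZE,   # note: resizes; Section 4 uses central crops
        batch_size=32,
        shuffle=(split == "train"),
    )

train_ds = load_split("train")
val_ds = load_split("val")
test_ds = load_split("test")
```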

2.2. Metadata

The metadata for each image consist of a class label (‘good’ or ‘bad’), the name of the cell line, and the time after seeding. As follows from Section 2.1, these metadata for a particular image can be retrieved from its location in the folder structure; in addition, the metadata are also stored in the spreadsheet OOC_datasheet.xlsx, in which each row starts with the image’s ID, which is the same as the file name (without the file extension) of the respective image. It should be noted that the spreadsheet provides more precise time-after-seeding values than those that can be retrieved from the folder structure: for instance, for images taken on the first day after seeding, the hour after seeding at which the image was taken is typically provided, while for images taken later on, a specific day after seeding is given. Furthermore, for some of the cell samples, the seeding density (in cells per mL) is provided, and for some others, the flow rate is indicated.
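As an illustration, the folder-derived metadata can be joined with the spreadsheet as sketched below. This is a sketch under assumptions: pandas with the openpyxl engine is available, the first spreadsheet column holds the image ID, and all image files sit at the depth shown in Section 2.1; the column names introduced here (image_id, label, and so on) are illustrative, not the spreadsheet's actual headers.

```python
from pathlib import Path

import pandas as pd

# Read the metadata spreadsheet (requires the openpyxl engine for .xlsx);
# the text specifies only that each row starts with the image ID.
meta = pd.read_excel("OOC_datasheet.xlsx")

# Recover labels, cell lines, and time groups from the folder structure.
records = []
for path in Path("OOC_image_dataset").rglob("*"):
    # Adjust the extensions to the dataset's actual image format.
    if path.suffix.lower() in {".png", ".jpg", ".jpeg", ".bmp"}:
        # Expected layout: <split>/<label>/<cell_type>/<time_group>/<file>
        split, label, cell_type, time_group = path.parts[-5:-1]
        records.append({
            "image_id": path.stem,  # file name without extension
            "split": split,
            "label": label,
            "cell_type": cell_type,
            "time_group": time_group,
        })

folder_df = pd.DataFrame(records)
# Join on the image ID, assumed to be the first spreadsheet column.
merged = folder_df.merge(meta, left_on="image_id",
                         right_on=meta.columns[0], how="left")
```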

3. Methods

3.1. Cell Lines

All cell lines were kept at 37 °C in a humidified atmosphere containing 5% CO₂.
The A549 cell line was cultured in high-glucose DMEM (41965062, Gibco, Carlsbad, CA, USA), supplemented with 10% FBS (F7524, Sigma Aldrich, Burlington, VT, USA) and 0.1% primocin (ant-pm-2, Invivogen, San Diego, CA, USA).
The Caco-2 cell line was cultured in low-glucose DMEM with sodium pyruvate (11880028, Gibco), supplemented with 10% FBS (F7524, Sigma Aldrich), 1% GlutaMax (A1286001, Thermo Fisher, Waltham, MA, USA) and 0.1% primocin (ant-pm-2, Invivogen).
HPMEC cells were cultured in EGM-2 Endothelial Cell Growth Medium-2 BulletKit, without addition of heparin solution (CC-3162, Lonza, Basel, Switzerland).
HUVEC cells were cultured in EGM-2 Endothelial Cell Growth Medium-2 BulletKit (CC-3162, Lonza).
The NHBE cell line was cultured in PneumaCult Ex-Plus basal medium (05041, Stemcell Technologies, Vancouver, BC, Canada), supplemented with PneumaCult Ex-Plus 50X supplement, hydrocortisone solution (07925, StemCell), and 0.1% GA1000 (CC-4083, Lonza).
The HSAEC cell line was cultured in Vascular cell basal medium (PCS-100-030, ATCC, Manassas, VA, USA), supplemented with the Bronchial epithelial cell medium growth kit (PCS-300-040, ATCC) and 0.1% GA1000 (CC-4083, Lonza).

3.2. Microfluidic Device Development

The microfluidic chips used in this study were supplied by Cellbox Labs (Riga, Latvia) (Figure 2). These chips feature a vertically stacked channel design. The chips were manufactured using injection molding and were constructed from cyclic olefin copolymer (COC) with a porous track-etched polyester (PET) membrane. For the experiments, a 20 µm thick membrane with 0.8 × 10⁶ pores per cm² was used, with each pore having a diameter of 3 µm.

3.3. OOC Model Development

To obtain the data, cells were cultivated in an OOC setup. For all chip experiments, cell culture media were equilibrated for 24 h at 37 °C in a humidified atmosphere containing 5% CO₂ before use. Prior to cell seeding, the chips were sterilized under a UV lamp within a laminar flow hood, followed by 30 min of 70% ethanol sterilization. Following the sterilization steps, the chips were coated with 100 µg/mL Matrigel® solution and incubated at 37 °C for 30 min. Afterwards, the chip membranes were rinsed with the respective cell media and the cells were seeded. A different seeding protocol was used for each cell line.
To create a gut-on-a-chip model, the Caco-2 and HUVEC cell lines were used to mimic the epithelial and endothelial cell layers. The bottom membrane was seeded with HUVECs (9 × 10⁶ cells/mL); afterwards, the chip was inverted and incubated in a cell culture incubator for 2 h. After incubation, the Caco-2 cell line was seeded in the top channel (2 × 10⁶ cells/mL) and incubated at 37 °C for 2 h. Next, the chips were connected to a microfluidic setup and the respective cell media were perfused through both channels, at a flow rate of 2 µL/min in the top channel and 1.66 µL/min in the bottom channel.
To develop lung cancer on a chip, A549 and HPMEC cells were used. First, the HPMEC cell line was seeded in the bottom channel (6 × 10⁶ or 9 × 10⁶ cells/mL), and the chip was inverted and incubated in the cell culture incubator for 5 h. Afterwards, the chips were connected to a microfluidic setup and the bottom channels were perfused with HPMEC media for 5 days at a flow rate of 2, 1.66, 1.71, or 1.74 µL/min, depending on the experiment. After 5 days of HPMEC culture, A549 cells were seeded in the top channel (1 × 10⁶, 10 × 10⁶, 9 × 10⁶, or 2.6 × 10⁶ cells/mL) and incubated for 2 h in the cell culture incubator. After cell seeding, the chips were connected to the microfluidic setup and media flow was applied at a rate of 2, 2.33, or 2.77 µL/min.
For lung-on-a-chip modeling, two different epithelial cell lines were used. NHBE cells were seeded in the top channels at a concentration of 2 × 10⁶ or 4 × 10⁶ cells/mL, whilst HSAEC cells were seeded at a concentration of 1 × 10⁶, 1.5 × 10⁶, 1.47 × 10⁶, 6 × 10⁶, or 10 × 10⁶ cells/mL; in both cases, the cells were then incubated for 2 h in the cell culture incubator. Subsequently, the chips were connected to the microfluidic setup and the respective cell media were perfused at a flow rate of 2 µL/min. After 5 days of culture, the chips were switched to an air–liquid interface (ALI) for 7 days and cultured in PneumaCult ALI Complete Base Medium (05001, Stemcell Technologies), supplemented with heparin solution (07980, Stemcell Technologies), hydrocortisone solution (07925, Stemcell Technologies), and 0.1% GA1000 (CC-4083, Lonza). Next, the HPMEC cell line was seeded in the bottom channel as previously described and cultivated in ALI for another 4 days. Data were generated only before ALI, since the presence of air within the epithelial cell channel can alter transparency and therefore the imaging process.
Different flow rates (ranging from 1 to 2.77 μL/min) and cell seeding densities were used depending on the cell type, since these are critical parameters for cell cultivation. A syringe pump (ISPLab02, Baoding Shenchen Precision Pump Co., Ltd., Baoding, China) was used to maintain laminar media flow within the channels in an infusion regimen.

3.4. Data Generation

For imaging, an automated brightfield microscopy setup developed by Cellbox Labs was used. This setup consists of a high-resolution IM Compact camera (IC10-05q32MU3101, Opto GmbH) and a precision XYZ motion system from Zaber Technologies that provides precise control of the chip’s movement. The motion system consists of three units that enable versatile manipulation of the chip holder, allowing movement between channels, movement within a channel, and adjustment of the chip holder’s height along the Z-axis. Live imaging and the subsequent acquisition of images were carried out using the OptoViewer software (version 2.0.0.2359, Opto GmbH), while the movement of the XYZ stage was controlled using the Zaber Launcher software (version 1.6.11, Zaber Technologies). For each cell type, parameters such as seeding density, flow rate, and the time after cell seeding were consolidated with the corresponding images for each channel into a single dataset. A ‘good’ or ‘bad’ class label was assigned to each image by experts in cell biology, as described in Section 2.

4. Application Example

To demonstrate how the organ-on-a-chip dataset can be used for training ML models, we conducted an experiment with MobileNetV3 [14]. MobileNetV3 is a DNN classifier primarily meant for use on mobile and edge devices, as it is lightweight and fast; despite its small size, however, it has demonstrated remarkably good performance on benchmark datasets. To leverage the advantages of transfer learning, we used an implementation of this DNN pretrained on the ImageNet dataset [15], available in the Keras library. To accommodate the needs of the experiment, the foundational model was modified in the following way:
  • The input layer was changed from 224 × 224 to 600 × 600 pixels to capture the morphology of input images in greater detail;
  • After the input layer, we added the data augmentation layers that would apply random rotations, flips and contrast adjustments to the input images during the training to augment the training data and decrease the risk of overfitting;
  • The top layers were extended with a GlobalAveragePooling2D layer, a BatchNormalization layer with a subsequent dropout rate of 0.2, and a final dense output layer with a single neuron activated by the sigmoid function to implement binary (‘good’ vs. ‘bad’) classification.
The summary of the model is provided in Figure 3.
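A minimal Keras sketch of a model along these lines is shown below. It assumes the MobileNetV3Large variant (the text does not specify which variant was used), and the augmentation ranges are illustrative; the authors' exact configuration may differ.

```python
# A hedged sketch of the modified MobileNetV3 described above;
# the variant choice and augmentation parameters are assumptions.
import tensorflow as tf
from tensorflow.keras import layers

IMG_SIZE = (600, 600)

# Augmentation: random rotations, flips, and contrast adjustments,
# active only during training (per the bullet list above).
augment = tf.keras.Sequential([
    layers.RandomRotation(0.1),
    layers.RandomFlip("horizontal_and_vertical"),
    layers.RandomContrast(0.2),
])

# Keras MobileNetV3 rescales raw pixel inputs internally, so images
# in [0, 255] can be fed directly.
base = tf.keras.applications.MobileNetV3Large(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False  # frozen during the first training stage

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = augment(inputs)
x = base(x, training=False)  # keep BatchNorm statistics frozen
x = layers.GlobalAveragePooling2D()(x)
x = layers.BatchNormalization()(x)
x = layers.Dropout(0.2)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)  # binary 'good'/'bad'
model = tf.keras.Model(inputs, outputs)
```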
The DNN model was trained on central crops of the images with a crop size of 600 × 600 pixels, which matched the size of the model’s input layer; furthermore, cropping the images eliminated the differences in image size within the dataset. The training procedure consisted of two stages. During the first stage, only the top layers of the model were trained for 30 epochs using the Adam optimizer with a learning rate of 0.0001 and a decay rate of 0.0001. During the second stage, the last 15 layers of the model were unfrozen, and the model was trained for 170 more epochs with the learning rate decreased (to prevent overfitting) to 0.00001. For the second training stage, we included an early stopping callback, which would interrupt the training and save the best model if there was no improvement in the model’s accuracy on the validation dataset for 30 epochs. A sketch of this procedure is given below. After the training, the model was evaluated on the test dataset. The loss and the accuracy of the model during the experiment are shown in Figure 4A and Figure 4B, respectively. Evaluation of the model on the test dataset yielded a binary classification accuracy of 0.81, with a precision of 0.79 and a recall of 0.78.
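The two-stage procedure could be reproduced along the following lines. This is a sketch under assumptions: the ‘decay rate of 0.0001’ is interpreted here as a classic inverse-time learning rate decay, and train_ds, val_ds, and test_ds come from the loading sketch in Section 2.1.

```python
# Stage 1: train only the newly added top layers for 30 epochs.
# 'decay' is interpreted as inverse-time decay, lr_t = lr0 / (1 + rate * t).
lr_schedule = tf.keras.optimizers.schedules.InverseTimeDecay(
    initial_learning_rate=1e-4, decay_steps=1, decay_rate=1e-4)
model.compile(optimizer=tf.keras.optimizers.Adam(lr_schedule),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=30)

# Stage 2: unfreeze the last 15 layers of the base model and fine-tune
# at a lower learning rate, with early stopping on validation accuracy.
base.trainable = True
for layer in base.layers[:-15]:
    layer.trainable = False

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_accuracy", patience=30, restore_best_weights=True)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=170,
          callbacks=[early_stop])

model.evaluate(test_ds)  # final evaluation on the held-out test split
```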
The confusion matrix, illustrating the distribution of predicted classes against the true classes, is presented in Figure 4C. To assess the achieved accuracy, we compared it to that of a naive classifier that would assign all instances to the largest class in the dataset; as the accuracy of the naive classifier would be 0.56, we conclude that the accuracy achieved with our model is substantially higher, which demonstrates that ML classifiers can be successfully trained on the organ-on-a-chip dataset.
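For reference, the majority-class baseline and the confusion matrix can be computed as sketched below, assuming scikit-learn and the unshuffled test_ds from the loading sketch (so that predictions and labels stay aligned).

```python
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix

# Collect ground-truth labels and thresholded sigmoid predictions;
# test_ds must not be shuffled so the two passes stay in the same order.
y_true = np.concatenate([y.numpy().ravel() for _, y in test_ds]).astype(int)
y_pred = (model.predict(test_ds).ravel() >= 0.5).astype(int)

# Naive baseline: assign every image to the largest class.
majority = np.bincount(y_true).argmax()
baseline = accuracy_score(y_true, np.full_like(y_true, majority))

print(f"naive baseline accuracy: {baseline:.2f}")
print(f"model accuracy: {accuracy_score(y_true, y_pred):.2f}")
print(confusion_matrix(y_true, y_pred))
```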
A possible approach to improving the performance of such classifiers on the present dataset is to further adjust their hyperparameters (e.g., the number of layers on top of the foundational model and the number of neurons in each such layer), as we implemented only minor modifications to the foundational model in the present work. Furthermore, it may be expedient to address the uneven class distribution in the dataset (e.g., by using a weight for each class when training the classifier; a sketch of this approach is given below) and the non-uniform image intensity (e.g., by normalizing the images), as these features of the data are known to affect the performance of DNN models. The subjectivity introduced by human raters inevitably leads to variability in image classification. Although it cannot be avoided entirely, this variability can be mitigated by enlarging the dataset and implementing intercalibration between raters, as well as by setting up and agreeing upon guidelines for image classification. Variability could also be reduced by enhancing image quality, either by avoiding technical artifacts or by recognizing them before even starting cell cultivation, for example, by performing microscopy imaging of empty chips.
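One possible implementation of the class weighting mentioned above is sketched here: weights inversely proportional to class frequencies in the training split, passed to Keras via class_weight. This is a common heuristic under the earlier loading assumptions, not necessarily the best setting for this dataset.

```python
import numpy as np

# Count 'bad' (0) and 'good' (1) labels in the training split and weight
# each class inversely to its frequency (balanced heuristic).
labels = np.concatenate([y.numpy().ravel() for _, y in train_ds]).astype(int)
counts = np.bincount(labels, minlength=2)
class_weight = {c: len(labels) / (2 * counts[c]) for c in (0, 1)}

# Pass the weights to training so errors on the rarer class cost more.
model.fit(train_ds, validation_data=val_ds, epochs=30,
          class_weight=class_weight)
```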

5. Patents

Cellbox Labs chip designs, manufacturing materials, and fabrication and bonding technology are patent-pending and have been filed as European Patent Office (EP) and Patent Cooperation Treaty (PCT) applications EP4198119A1 and WO2023111127A1, respectively. Consequently, exact details of the fabrication and bonding parameters cannot be disclosed until the patents are granted.

Author Contributions

Conceptualization, R.K., L.L., M.I. and A.A.; methodology, V.M., A.S., K.N., F.R., R.R., K.G.Z., R.K., M.I., L.L. and A.Z.; software, K.G.Z., M.I. and A.Z.; validation, K.G.Z., M.I., A.Z., A.S. and V.M.; investigation, V.M., A.S., K.G.Z., M.I. and A.Z.; data curation, K.G.Z., M.I. and A.Z.; writing—original draft preparation, V.M., A.S., M.I. and L.L.; writing—review and editing, V.M., R.R., G.M., M.I. and A.A.; supervision, A.A. and R.K.; project administration, R.K., T.L., R.R., G.M. and A.A.; funding acquisition, R.K., R.R., G.M. and A.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the European Regional Development Fund (ERAF), grant number 1.1.1.1/21/A/079.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data is available at https://doi.org/10.5281/zenodo.10203721, accessed on 24 November 2023.

Conflicts of Interest

A.A., G.M. and R.R. are founders, board members, and equity holders of Cellbox Labs, LLC. The remaining authors declare no conflicts of interest.

References

  1. Koyilot, M.C.; Natarajan, P.; Hunt, C.R.; Sivarajkumar, S.; Roy, R.; Joglekar, S.; Pandita, S.; Tong, C.W.; Marakkar, S.; Subramanian, L.; et al. Breakthroughs and Applications of Organ-on-a-Chip Technology. Cells 2022, 11, 1828. [Google Scholar] [CrossRef] [PubMed]
  2. Leung, C.M.; De Haan, P.; Ronaldson-Bouchard, K.; Kim, G.-A.; Ko, J.; Rho, H.S.; Chen, Z.; Habibovic, P.; Jeon, N.L.; Takayama, S.; et al. A Guide to the Organ-on-a-Chip. Nat. Rev. Methods Primers 2022, 2, 33. [Google Scholar] [CrossRef]
  3. Shin, W.; Hinojosa, C.D.; Ingber, D.E.; Kim, H.J. Human Intestinal Morphogenesis Controlled by Transepithelial Morphogen Gradient and Flow-Dependent Physical Cues in a Microengineered Gut-on-a-Chip. iScience 2019, 15, 391–406. [Google Scholar] [CrossRef] [PubMed]
  4. Wong, T.-Y.; Chang, S.-N.; Jhong, R.-C.; Tseng, C.-J.; Sun, G.-C.; Cheng, P.-W. Closer to Nature Through Dynamic Culture Systems. Cells 2019, 8, 942. [Google Scholar] [CrossRef] [PubMed]
  5. Cavarzerani, E.; Caligiuri, I.; Bartoletti, M.; Canzonieri, V.; Rizzolio, F. 3D Dynamic Cultures of HGSOC Organoids to Model Innovative and Standard Therapies. Front. Bioeng. Biotechnol. 2023, 11, 1135374. [Google Scholar] [CrossRef] [PubMed]
  6. Ewart, L.; Apostolou, A.; Briggs, S.A.; Carman, C.V.; Chaff, J.T.; Heng, A.R.; Jadalannagari, S.; Janardhanan, J.; Jang, K.-J.; Joshipura, S.R.; et al. Performance Assessment and Economic Analysis of a Human Liver-Chip for Predictive Toxicology. Commun. Med. 2022, 2, 154. [Google Scholar] [CrossRef] [PubMed]
  7. Cauli, E.; Polidoro, M.A.; Marzorati, S.; Bernardi, C.; Rasponi, M.; Lleo, A. Cancer-on-Chip: A 3D Model for the Study of the Tumor Microenvironment. J. Biol. Eng. 2023, 17, 53. [Google Scholar] [CrossRef] [PubMed]
  8. Liu, X.; Fang, J.; Huang, S.; Wu, X.; Xie, X.; Wang, J.; Liu, F.; Zhang, M.; Peng, Z.; Hu, N. Tumor-on-a-Chip: From Bioinspired Design to Biomedical Application. Microsyst. Nanoeng. 2021, 7, 50. [Google Scholar] [CrossRef] [PubMed]
  9. Zhou, H.; Weir, M.D.; Xu, H.H.K. Effect of Cell Seeding Density on Proliferation and Osteodifferentiation of Umbilical Cord Stem Cells on Calcium Phosphate Cement-Fiber Scaffold. Tissue Eng. Part A 2011, 17, 2603–2613. [Google Scholar] [CrossRef] [PubMed]
  10. Morales, I.A.; Boghdady, C.-M.; Campbell, B.E.; Moraes, C. Integrating Mechanical Sensor Readouts into Organ-on-a-Chip Platforms. Front. Bioeng. Biotechnol. 2022, 10, 1060895. [Google Scholar] [CrossRef] [PubMed]
  11. Basu, K.; Sinha, R.; Ong, A.; Basu, T. Artificial Intelligence: How Is It Changing Medical Sciences and Its Future? Indian J. Dermatol. 2020, 65, 365–370. [Google Scholar] [CrossRef] [PubMed]
  12. Li, J.; Chen, J.; Bai, H.; Wang, H.; Hao, S.; Ding, Y.; Peng, B.; Zhang, J.; Li, L.; Huang, W. An Overview of Organs-on-Chips Based on Deep Learning. Research 2022, 2022, 9869518. [Google Scholar] [CrossRef] [PubMed]
  13. Suganyadevi, S.; Seethalakshmi, V.; Balasamy, K. A Review on Deep Learning in Medical Image Analysis. Int. J. Multimed. Inf. Retr. 2022, 11, 19–38. [Google Scholar] [CrossRef] [PubMed]
  14. Howard, A.; Sandler, M.; Chen, B.; Wang, W.; Chen, L.-C.; Tan, M.; Chu, G.; Vasudevan, V.; Zhu, Y.; Pang, R.; et al. Searching for MobileNetV3. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 1314–1324. [Google Scholar]
  15. Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; Fei-Fei, L. ImageNet: A Large-Scale Hierarchical Image Database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar]
Figure 1. Representative images of A549 and HPMEC cell lines from lung cancer on a chip model classified with ‘good’/’bad’ label. Scale bar: 100 μm.
Figure 2. Image of microfluidic chip. Chip dimensions are 4.9 cm × 3 cm and total height 6 mm. Blue dye represents the bottom channel, red dye—top channel.
Figure 3. Summary of DNN architecture.
Figure 4. Summary of training MobileNetV3 on the dataset: (A) model loss during training, (B) model accuracy during training, and (C) confusion matrix for the performance of the model on the test data.
Table 1. Distribution of cell images by cell line and ‘good’/‘bad’ class.

Cell Line    ‘Good’    ‘Bad’
A549            537      238
Caco-2          109      237
HPMEC           798      664
HUVEC            15       92
NHBE             33      105
HSAEC           163       81
Table 2. Distribution of cell images by cell line and time after seeding.

Cell Line    0–1 Days    2–3 Days    4 Days    >4 Days
A549              235          67        83        390
Caco-2             36         117       111         82
HPMEC             463         452       109        438
HUVEC              16          52         0         39
NHBE               16          44         9         69
HSAEC              86          50        27         81
