Article

Systematic Quantification of Cell Confluence in Human Normal Oral Fibroblasts

Ching-Hsiang Chiu, Jyh-Der Leu, Tzu-Ting Lin, Pin-Hua Su, Wan-Chun Li, Yi-Jang Lee and Da-Chuan Cheng
1 Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming University, Taipei 11221, Taiwan
2 Division of Radiation Oncology, Taipei City Hospital RenAi Branch, Taipei 106, Taiwan
3 Institute of Neuroscience, National Chengchi University, Taipei 116, Taiwan
4 Department of Chinese Medicine, En Chu Kong Hospital, Taipei 237, Taiwan
5 Institute of Oral Biology, School of Dentistry, National Yang-Ming University, Taipei 11221, Taiwan
6 Cancer Progression Research Center, National Yang-Ming University, Taipei 11221, Taiwan
7 Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung 404, Taiwan
* Authors to whom correspondence should be addressed.
These authors contributed equally.
Appl. Sci. 2020, 10(24), 9146; https://doi.org/10.3390/app10249146
Submission received: 17 September 2020 / Revised: 16 November 2020 / Accepted: 20 November 2020 / Published: 21 December 2020
(This article belongs to the Special Issue Image Processing Techniques for Biomedical Applications)

Abstract

Background: The accurate determination of cell confluence is a critical step for generating reliable results from designed experiments in cell biological studies. However, the confluence of the same culture may be estimated differently by individual researchers. Herein, we designed a systematic quantification scheme implemented on the Matlab platform, the "Confluence-Viewer" program, to assist cell biologists in better determining cell confluence. Methods: Human normal oral fibroblasts (hOFs) seeded in 10 cm culture dishes were visualized under an inverted microscope for the acquisition of cell images. The images were subjected to a cell segmentation algorithm combining the top-hat transformation and the Otsu thresholding technique. A regression model was built using a quadratic model and a shape-preserving piecewise cubic model. Results: The cell segmentation algorithm generated a regression curve that was highly correlated with the cell confluence determined by experienced researchers. However, the correlation was low when compared to the cell confluence determined by novice students. Interestingly, the cell confluence determined by experienced researchers became more diverse when they examined the same images without a time limitation (rather than within 1 min). Conclusion: This tool could prevent unnecessary human-made mistakes and meaningless repeats for novice researchers working on cell-based studies in health care or cancer research.

1. Introduction

Cell confluence is defined as the percentage of a culture dish or a flask occupied by any type of adherent mammalian cells. It is an important parameter used to determine the phase (lag, log, and plateau) of cell growth in order to correctly conduct further experiments in cell culture biology [1]. Although this is a fundamental step in cell biological research, the successful determination of precise confluence depends largely on the experience and intuition of researchers, especially when novices are assigned to culture different types of adherent cells and may find it difficult to judge the confluence of cells with various shapes. Experiments often fail or become unrepeatable because of mistaken confluence. Experienced researchers may also easily overlook this point when training novices. To avoid unnecessary trial-and-error, wasted consumables, and lost time for novices in cell biology labs, it is important to design a convenient tool to aid the progression of experiments.
Computer-aided technology has been widely used to mitigate the bias and individual uncertainty of human observers [2]. Recently, artificial intelligence (AI) has become a critical tool in various fields, providing precise and rapid prediction and identification to assist humans in making better decisions on complex and large datasets [3]. In biomedical applications, the identification of cancer lesions in pathological imaging and 2D/3D medical imaging is the most important application of AI [4,5,6,7,8,9,10,11]. Accumulated studies in the literature have demonstrated that the diagnostic accuracy of AI is becoming closer to that of well-trained physicians and technologists [12]. High-throughput diagnosis by AI is another important advantage in clinical applications [13]. In cell biological studies, many cell images are also generated during various experiments, and it is of interest for specific algorithms to be developed to assist scientists in their research work.
Computer-aided technology such as computer-aided diagnosis (CAD) has been fruitfully researched. Accurate artery boundary detection was proposed to assess artery wall movement during a heart cycle [14,15] for the purpose of measuring artery compliance. A computer-aided detection system was proposed to locate the mandibular canal boundary on tooth X-ray panoramas and measure the maximum possible distance for dental implants, preventing nerve injury in the mandibular canal [16]. A computer-aided analysis method was proposed to assess human intracranial compliance non-invasively using Magnetic Resonance Imaging (MRI) [17]. In tumor therapy planning, a computer-aided method was proposed for organ and tumor contouring on computed tomography (CT) images using an interactive approach [18]. All these technologies showed that computer-assisted schemes can quantify more objectively than human observers. Several hundred cell types exist in the human body. Mammalian cells can be cultured in vitro, and the growth of cells is usually quantified by hemocytometry [19]. Given the time-consuming nature of laboratory operations, it is also of interest to design an automated algorithm to directly predict the cell number and growth from microscopic images. Such an algorithm would be important for standardizing experiments and for measuring assay impact in cell biological research.
There are some commercial products that perform automated cell quantification, such as the TC20 (Bio-Rad Laboratories, Inc., Hercules, CA, USA) [20]. There are two research aspects of cell quantification: (1) static cell number quantification (also phrased as cell segmentation) and (2) dynamic cell tracking and quantification. The former uses a single image to quantify the cell number [21,22], whereas the latter may use many images taken at different time points to assess cell dynamics and behavior [23]. Some state-of-the-art cell segmentation papers and codes are collected in [24]. This study belongs to the former type, cell segmentation; however, we do not count individual cells but instead quantify their occupancy as a percentage.
In this study, a systematic quantification scheme was proposed, and the "Confluence-Viewer" (CV) program was used to assess the percentage of cell confluence, which is often misjudged by novice scientists and even by experienced ones when culturing a new cell type. We compared the cell confluence levels determined by novice students, experienced cell biologists, and the "Confluence-Viewer" program. This revealed that the program-predicted cell confluence is closer to that determined by experienced scientists than to that determined by novices. As cell confluence is the most basic but important piece of information for successful experiments, this program is expected to assist cell biologists in preventing unnecessary mistakes during experiments.

2. Materials and Methods

Cell line—human normal oral fibroblasts (hOFs) were cultured in vitro from oral gingival tissues isolated from adult donors undergoing third molar extraction, under the approval of the Institutional Review Board (IRB) of National Taiwan University Hospital (IRB approval number: 203106005RINC), Taiwan. In brief, clinical specimens were processed under sterile conditions using fine forceps to separate the epithelium and fibroblastic tissues. Isolated fibroblastic tissues were covered with a coverslip, and we then waited for cells to migrate out of the tissues. Once the cells had migrated out, the fibroblastic tissues were removed and the cells were trypsinized and cultured in Dulbecco's Modified Eagle's Medium (DMEM, Life Technologies Co., Grand Island, NY, USA) containing 10% fetal bovine serum (FBS), 2 mM L-glutamate, and 50 U/mL penicillin (Sigma-Aldrich, St. Louis, MO, USA). Cells were incubated in a humidified incubator with 5% CO2 at 37 °C and passaged every two days. After seeding, cell images were acquired under an inverted microscope (Olympus CKX41, Shinjuku-ku, Tokyo, Japan) with a digital camera (COOLPIX P5000, NIKON CORP., Minato-ku, Tokyo, Japan) every 6 h for a total of 60 h. Five regions of each dish (rightmost, leftmost, upper, lower, and middle positions) were selected for image acquisition. The images were taken with the following parameters: image spatial resolution 3648 × 2736 (72 dpi in both the x and y directions), ISO 800 or 1600, aperture f/5.3, focal length 26.3 mm, and without flash. Figure 1 demonstrates ten typical cell confluence images.
Cell segmentation algorithm—there are many image processing techniques, and combinations of them can produce thousands of tools. The top-hat transform is a morphological operation used in many image processing tasks, such as feature extraction, background equalization, and image enhancement. The top-hat transform of an image $f$ is given by

$$T(f) = f - f \circ s \quad (1)$$

where $\circ$ denotes the morphological opening operation and $s$ is the structuring element; here, $s$ is a disk of radius 4. After the top-hat transform, the background is equalized and the cells are enhanced, as shown below. The image is then easily clustered into two categories: foreground (the cells) and background. The well-known Otsu thresholding technique can then be applied to find an optimal threshold value. Otsu's method exhaustively searches for the threshold that minimizes the within-class variance, defined as a weighted sum of the variances of the two classes:
$$\sigma_w^2(t) = p_1(t)\,\sigma_1^2(t) + p_2(t)\,\sigma_2^2(t) \quad (2)$$
where $p_1$ and $p_2$ are the probabilities of classes 1 and 2 separated by the threshold $t$ ($1 \le t \le 255$, $t$ an integer), and $\sigma_1^2$ and $\sigma_2^2$ are the variances of these two classes. The probabilities $p_1$ and $p_2$ are computed from the 256 bins of the gray-level image histogram: $p_1(t) = \sum_{i=0}^{t-1} h(i)$ and $p_2(t) = \sum_{i=t}^{255} h(i)$, where $h(i)$ is the relative histogram of gray level $i$. Notably, $p_1 + p_2 = 1$.
For two classes, as in this study, minimizing the within-class variance is equivalent to maximizing the between-class variance, since $\sigma^2 = \sigma_w^2 + \sigma_b^2$, where $\sigma^2$ is the total image variance. The between-class variance is:
$$\sigma_b^2 = p_1 (\mu_1 - \mu)^2 + p_2 (\mu_2 - \mu)^2 \quad (3)$$
For simplicity, we drop the threshold parameter $t$. Using the two identities $p_1 + p_2 = 1$ and $\mu = p_1 \mu_1 + p_2 \mu_2$, the second term on the right-hand side of Equation (3) becomes

$$p_2 (\mu_2 - \mu)^2 = p_2 \left( \frac{\mu - p_1 \mu_1}{p_2} - \mu \right)^2 = \frac{p_1^2 (\mu - \mu_1)^2}{p_2} = \frac{p_1^2 (\mu - \mu_1)^2}{1 - p_1}.$$

Then,

$$\sigma_b^2 = p_1 (\mu_1 - \mu)^2 + \frac{p_1^2 (\mu - \mu_1)^2}{1 - p_1} = \frac{p_1}{1 - p_1} (\mu - \mu_1)^2.$$

Thus, we obtain a simplified expression for the between-class variance that uses only $\mu_1$, $p_1$, and $\mu$. The threshold value $t$ is obtained by maximizing $\sigma_b^2$.
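For illustration, the following is a minimal Matlab sketch of this exhaustive search written against the simplified expression above; the variable names are ours, and an 8-bit grayscale image I is assumed:

```matlab
% Exhaustive Otsu search via the simplified between-class variance;
% assumes an 8-bit grayscale image I (uint8). Variable names are illustrative.
h  = imhist(I, 256);               % 256-bin gray-level histogram
h  = h / sum(h);                   % relative histogram, so sum(h) == 1
mu = (0:255) * h;                  % global mean gray level
bestT = 1; bestVar = -Inf;
for t = 1:255                      % candidate thresholds, 1 <= t <= 255
    p1 = sum(h(1:t));              % class-1 probability: bins 0 .. t-1
    if p1 == 0 || p1 == 1, continue; end
    mu1    = ((0:t-1) * h(1:t)) / p1;         % class-1 mean
    sigmaB = p1 / (1 - p1) * (mu - mu1)^2;    % simplified sigma_b^2
    if sigmaB > bestVar, bestVar = sigmaB; bestT = t; end
end
```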
After the threshold value is determined, the cell confluence image is binarized, the area above the threshold value is calculated, and the area percentage is saved. We obtained three images for each percentage category, giving a total of 30 images for the regression analysis. Every image was visually verified by an expert and assigned to a percentage category.
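Putting the pieces together, below is a minimal sketch of the measurement pipeline in Matlab (Image Processing Toolbox); the file name is hypothetical, and the red-channel conversion and 6% downsizing follow the description in Section 3.1:

```matlab
% End-to-end sketch: preprocess, top-hat, Otsu threshold, area percentage.
raw  = imread('cell_image.jpg');           % hypothetical file name
gray = raw(:, :, 1);                       % red channel only (see Section 3.1)
gray = imresize(gray, 0.06);               % shrink to 6% of raw size (Section 3.1)
flat = imtophat(gray, strel('disk', 4));   % top-hat: f - (f o s), disk radius 4
bw   = imbinarize(flat, graythresh(flat)); % Otsu threshold, then binarize
areaPct = 100 * nnz(bw) / numel(bw);       % occupied-area percentage
```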
The requirements of using this program are the following:
(1) The illumination setting of the digital camera should be consistent;
(2) Camera parameters such as the focal length and zoom factor should be fixed.
The "Confluence-Viewer" Matlab source code is openly available on the Matlab File Exchange website [25]. Some cell images used for constructing the regression model can be downloaded from [26].
Statistical analysis—Pearson's correlation was used to measure the linear correlation between two variables. Pearson's r (the Pearson correlation coefficient) is a value between +1 and −1. The correlation and statistical significance were analyzed using the IBM SPSS Statistics software (ver. 27; IBM Corp., Armonk, NY, USA).
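For reference, a minimal sketch of the same statistic in base Matlab (the study itself used SPSS); x and y are hypothetical paired confluence estimates from two raters:

```matlab
% Pearson's r from paired vectors x and y (illustrative variable names).
R = corrcoef(x, y);   % 2x2 correlation matrix
r = R(1, 2);          % Pearson's r, between -1 and +1
```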
Regression models—two models were used for curve fitting: the quadratic model and the shape-preserving piecewise cubic interpolation model. A quadratic model assumes that the data points form a parabolic curve, i.e., $y = a_1 x^2 + a_2 x + a_3$. Three points determine a parabolic curve; when the data consist of more than three points, the problem is overdetermined, and we can use the least-squares method to minimize the total error and analytically determine the three parameters $a_1$, $a_2$, and $a_3$. This quadratic model can be constructed using the Matlab function "polyfit". The shape-preserving piecewise cubic interpolation is a similar extension and can be constructed using the Matlab function "interp1" with the parameter "pchip".
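As a hedged sketch with our own variable names, the two fits can be reproduced as follows, assuming the training pairs are aggregated so that the x values are unique and increasing, as interp1 requires:

```matlab
% x: algorithm-measured area percentages (unique, increasing)
% y: expert-labeled confluence percentages; xq: query points
coeffs = polyfit(x, y, 2);            % least-squares quadratic fit
yQuad  = polyval(coeffs, xq);         % evaluate the quadratic model
yPchip = interp1(x, y, xq, 'pchip');  % shape-preserving piecewise cubic
```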

3. Results

3.1. Acquisition and Processing of Cell Culture Images

Firstly, we cultured hOFs by seeding a regular cell number (5 × 10^6) in a 10 cm dish, and cell images were acquired using an optical microscope with a digital camera every 6 h. The typical cell images were grouped into 10 incremental cell confluence levels, representing the percentage of the cultured area occupied by cells (Figure 1). Herein, we demonstrate an easy-to-implement scheme that could reach our goal. The flowchart of our scheme is shown in Figure 2; it describes the scheme using one of the ten datasets. The raw image was recorded in color. For the algorithmic measurement of the cell confluence, the digital images were converted into gray mode using only the red channel. Since the high-resolution image might contain noise, the image was downsampled to 6% of its raw size. This is helpful for reducing the computational cost without affecting the accuracy of the final result. The uneven illumination problem (as shown in each top-left subfigure of Figure 3) was alleviated using the top-hat transformation (as shown in each middle subfigure of Figure 3), followed by threshold segmentation to generate binary images for determining the percentage of cell confluence. Transformed binary images of different cell confluence levels were obtained (as shown in each bottom-right subfigure of Figure 3). Two regression models were tested, the quadratic model and the shape-preserving piecewise cubic interpolation model. The abscissa of Figure 4 denotes the cell confluence area percentage calculated by our scheme; the ordinate is the percentage visually determined by an expert. The circle points denote the 30 image data points. The red curve presents the quadratic model and the black curve presents interpolation using the shape-preserving piecewise cubic model, as described in Section 2. In this way, the regression model was built for the following tests, as shown in Figure 4.

3.2. Determination of Cell Confluence

The different confluence levels shown in the cell photos were quantified by the algorithm (see Materials and Methods), experienced researchers, and novice students. Photos of the cells marked from 1 to 10 represent the increasing cell number, and they were used for the prediction of cell confluence by humans or the algorithm. Each photo was examined by six experienced researchers and six novice students. The results showed that the best regression of mean cell confluence was obtained with the algorithm's prediction (Figure 5A). The mean cell confluence predicted by experienced researchers also generated a better regression than that predicted by novice students (Figure 5B,C). The time for humans to determine the cell confluence for each photo was limited to 1 min. Interestingly, the regression decreased when no time limitation was set for experienced researchers to determine the cell confluence (Figure 5D).

3.3. The Correlation of Cell Confluence Predicted by Novice Students and Experienced Researchers

We used Spearman's rank correlation analysis to compare the cell confluence levels determined by novice students with those determined by experienced researchers and the algorithm. The results show that there was little correlation between novice students and experienced researchers in the determined cell confluence when inspecting cell images with or without a time limitation (Figure 6A,B). The correlation between the novice- and algorithm-determined cell confluence was also low (Figure 6C). The 95% limit of agreement was then determined using the Bland–Altman analysis. Novices versus experienced researchers with or without a time limitation for confluence determination and versus the algorithm are shown in Figure 6D–F, respectively. The results suggest that the cell confluence levels determined by novices have low consistency with those determined by experienced researchers or the algorithm.
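For clarity, the Bland–Altman limits of agreement reduce to a simple calculation on paired differences; a minimal sketch in base Matlab (R2018b or later for yline), with a and b as hypothetical paired confluence estimates, e.g., novices versus the algorithm:

```matlab
% Bland-Altman 95% limits of agreement from paired vectors a and b.
d   = a - b;                       % paired differences
md  = mean(d);                     % bias (mean difference)
sd  = std(d);                      % SD of the differences
loa = md + 1.96 * [-1, 1] * sd;    % 95% limits of agreement
scatter((a + b) / 2, d);           % difference vs. mean plot
yline(md); yline(loa(1)); yline(loa(2));
```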

3.4. The Correlation of Cell Confluence Predicted by Experienced Researchers and the Algorithm

We then examined the correlation of cell confluence levels predicted by experienced researchers and the Confluence-Viewer program. Interestingly, the results showed a good correlation between the algorithm- and experienced-researcher-predicted cell confluence with or without a time limitation (Figure 7A,B). Although a longer determination time decreased the prediction accuracy of experienced researchers, a high correlation remained in these two conditions (Figure 7C). Thus, the algorithm-determined cell confluence is comparable to experienced researchers' predictions. Again, the Bland–Altman analysis was used to determine the 95% limit of agreement. The confluence determination results of experienced researchers with or without a time limitation versus the algorithm are shown in Figure 7D,E. Additionally, the cell confluence determined by experienced researchers showed little bias with or without a time limitation (Figure 7F). From Figure 7E, we can see that the difference between the results of experienced researchers (without a time limitation) and the algorithm is very limited; the standard deviation of the difference is the smallest among all comparisons (Figure 6D–F and Figure 7D–F). This result reveals that the proposed software system is reliable and has the potential to replace experienced researchers' work in cell confluence quantification.

4. Discussion

The measurement of cell growth is the most fundamental technique to be learned by new students who are interested in working in molecular and cell biology labs. Apart from routine passaging, adherent cell cultures are usually treated with experimental agents before being trypsinized for cell counting. Therefore, the adequate and accurate determination of cell confluence can greatly influence the results of experiments. For example, different levels of cell confluence represent specific phases of growth curves, which may respond differently to the same agents [27]. An inaccurate determination of cell confluence would lead to poor experimental results, meaningless repeats, and troubleshooting. Our analysis demonstrated that inexperienced students made more diverse decisions on cell confluence than did experienced researchers and the algorithm. A closer correlation was also observed between experienced researchers and the algorithm. In a busy lab, experienced researchers may overlook the guidelines on confluence determination for novice students for various reasons. Therefore, our algorithm would save time for experienced researchers and prevent consumables from being wasted on experiments confounded by mistaken confluence.
Busschots et al. proposed a non-invasive, non-destructive, and label-free method using the area fraction (AF) output bundled with the ImageJ freeware [28]. In our method, we proposed a scheme and designed a program on the Matlab platform that directly measures the cell shapes and unoccupied space, tested on hundreds of cell imaging files. We built a regression model based on morphological operations, an image enhancement method, and a quadratic regression. This algorithm could precisely recognize similar types of cells with distinct morphology and confluence. In the previous work, the free software ImageJ was used to adjust parameters, and all operations had to be performed manually; for example, the "image analysis" stage comprised eight steps that had to be carried out non-automatically. This may not be suitable for processing sets of images. In this study, we implemented the source code with image processing techniques, and the algorithm and its code are fully automated for batch processing of files. The program can be modified if the images are acquired under different conditions, such as different illumination or focal lengths, or even if the cell morphology differs from the shape of fibroblasts. Our scheme established robust and user-friendly code with good potential for multiple applications.
We used human oral fibroblasts in the current study. As the primary cells will enter replicative senescence, it may be possible to use this program to assess the growth of oral cells and their responses to compounds against oral ulcers. We also expect to execute the high-throughput analysis of malignant cells treated with therapeutic agents to determine the efficacy of drugs using this Confluence-Viewer program.

5. Conclusions

In summary, we provided a novel non-invasive and non-destructive algorithm to assist inexperienced students and even novice cell biologists in making consistent, systematic, and fast quantifications of cell confluence. This method was also validated using predictions of cell confluence from experienced researchers, which were highly correlated with the results of the algorithm. Our source codes are open, and we expect that this program will be widely used or even modifiable by cell biologists working in various research fields for more efficient experimental progression.

Author Contributions

Conceptualization, Y.-J.L., J.-D.L. and D.-C.C.; investigation, C.-H.C., T.-T.L. and P.-H.S.; software, D.-C.C.; validation, Y.-J.L. and D.-C.C.; formal analysis, C.-H.C., Y.-J.L. and D.-C.C.; resources, W.-C.L.; data curation, D.-C.C.; writing—original draft preparation, Y.-J.L. and D.-C.C.; writing—review and editing, Y.-J.L. and D.-C.C.; supervision, Y.-J.L.; funding acquisition, J.-D.L. and Y.-J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Ministry of Science and Technology of Taiwan (108-2314-B-010-016-) and an intramural grant from Taipei City Hospital, RenAi Branch (TPCH-108-23). The article processing fee was partially supported by China Medical University under grant number: CMU109-ASIA-02.

Acknowledgments

We thank the Cancer Progression Research Center, National Yang-Ming University, supported by the Featured Areas Research Center Program within the framework of the Higher Education Sprout Project of the Ministry of Education (MOE) in Taiwan. We also thank the volunteers, including Min-Ying Lin, Pin-Ho Lo, Chia-Yun Kang, Hsueh-Yen Yu, Yu-Lou Wu, and Hua-An Tso, who provided their opinions on the cell images.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Iloki Assanga, S.B.; Gil-Salido, A.A.; Lewis Luján, L.M.; Rosas-Durazo, A.; Acosta-Silva, A.L.; Rivera-Castañeda, E.G.; Rubio-Pino, J.L. Cell growth curves for different cell lines and their relationship with biological activities. Int. J. Biotechnol. Mol. Biol. Res. 2013, 4, 10. [Google Scholar] [CrossRef]
  2. Savage, N. Computer logic meets cell biology: How cell science is getting an upgrade. Nature 2018, 564, S1–S3. [Google Scholar] [CrossRef] [PubMed]
  3. Jones, D.T. Setting the standards for machine learning in biology. Nat. Rev. Mol. Cell Biol. 2019, 20, 659–660. [Google Scholar] [CrossRef] [PubMed]
  4. Browning, L.; Colling, R.; Rakha, E.; Rajpoot, N.; Rittscher, J.; James, J.A.; Salto-Tellez, M.; Snead, D.R.J.; Verrill, C. Digital pathology and artificial intelligence will be key to supporting clinical and academic cellular pathology through COVID-19 and future crises: The PathLAKE consortium perspective. J. Clin. Pathol. 2020. [Google Scholar] [CrossRef]
  5. Ibrahim, A.; Gamble, P.; Jaroensri, R.; Abdelsamea, M.M.; Mermel, C.H.; Chen, P.C.; Rakha, E.A. Artificial intelligence in digital breast pathology: Techniques and applications. Breast 2020, 49, 267–273. [Google Scholar] [CrossRef] [Green Version]
  6. Bera, K.; Schalper, K.A.; Rimm, D.L.; Velcheti, V.; Madabhushi, A. Artificial intelligence in digital pathology—New tools for diagnosis and precision oncology. Nat. Rev. Clin. Oncol. 2019, 16, 703–715. [Google Scholar] [CrossRef]
  7. Niazi, M.K.K.; Parwani, A.V.; Gurcan, M.N. Digital pathology and artificial intelligence. Lancet Oncol. 2019, 20, e253–e261. [Google Scholar] [CrossRef]
  8. Chan, S.; Bailey, J.; Ros, P.R. Artificial Intelligence in Radiology: Summary of the AUR Academic Radiology and Industry Leaders Roundtable. Acad. Radiol. 2020, 27, 117–120. [Google Scholar] [CrossRef] [Green Version]
  9. Jha, S.; Cook, T. Artificial Intelligence in Radiology—The State of the Future. Acad. Radiol. 2020, 27, 1–2. [Google Scholar] [CrossRef] [Green Version]
  10. Weisberg, E.M.; Chu, L.C.; Park, S.; Yuille, A.L.; Kinzler, K.W.; Vogelstein, B.; Fishman, E.K. Deep lessons learned: Radiology, oncology, pathology, and computer science experts unite around artificial intelligence to strive for earlier pancreatic cancer diagnosis. Diagn. Interv. Imaging 2020, 101, 111–115. [Google Scholar] [CrossRef]
  11. Weikert, T.; Cyriac, J.; Yang, S.; Nesic, I.; Parmar, V.; Stieltjes, B. A Practical Guide to Artificial Intelligence-Based Image Analysis in Radiology. Investig. Radiol. 2020, 55, 1–7. [Google Scholar] [CrossRef] [PubMed]
  12. Colling, R.; Pitman, H.; Oien, K.; Rajpoot, N.; Macklin, P.; CM-Path AI in Histopathology Working Group; Snead, D.; Sackville, T.; Verrill, C. Artificial intelligence in digital pathology: A roadmap to routine use in clinical practice. J. Pathol. 2019, 249, 143–150. [Google Scholar] [CrossRef] [PubMed]
  13. Bray, M.A.; Carpenter, A.E. Quality Control for High-Throughput Imaging Experiments Using Machine Learning in Cellprofiler. Methods Mol. Biol. 2018, 1683, 89–112. [Google Scholar] [CrossRef] [PubMed]
  14. Cheng, D.C.; Wu, J.F.; Kao, Y.H.; Su, C.H.; Liu, S.H. Accurate Measurement of Cross-Sectional Area of Femoral Artery on MRI Sequences of Transcontinental Ultramarathon Runners Using Optimal Parameters Selection. J. Med. Syst. 2016, 40, 260. [Google Scholar] [CrossRef]
  15. Cheng, D.C.; Wu, J.F.; Kao, Y.H.; Su, C.H.; Liu, S.H. Elliptic Shape Prior Dynamic Programming for Accurate Vessel Segmentation in MRI Sequences with Automated Optimal Parameter Selection. J. Med. Biol. Eng. 2016, 2, 1–9. [Google Scholar] [CrossRef]
  16. Cheng, D.C.; Chen, L.W.; Shen, Y.W.; Fuh, L.J. Computer-assisted system on mandibular canal detection. Biomed. Tech. 2017, 62, 575–580. [Google Scholar] [CrossRef]
  17. Tsai, Y.H.; Chen, H.C.; Tung, H.; Wu, Y.Y.; Chen, H.M.; Pan, K.J.; Cheng, D.C.; Chen, J.H.; Chen, C.C.; Chai, J.W.; et al. Noninvasive assessment of intracranial elastance and pressure in spontaneous intracranial hypotension by MRI. J. Magn. Reson. Imaging 2018, 48, 1255–1263. [Google Scholar] [CrossRef]
  18. Cheng, D.C.; Chi, J.H.; Yang, S.N.; Liu, S.H. Organ Contouring for Lung Cancer Patients with a Seed Generation Scheme and Random Walks. Sensors 2020, 20, 4823. [Google Scholar] [CrossRef]
  19. Cadena-Herrera, D.; Esparza-De Lara, J.E.; Ramirez-Ibanez, N.D.; Lopez-Morales, C.A.; Perez, N.O.; Flores-Ortiz, L.F.; Medina-Rivero, E. Validation of three viable-cell counting methods: Manual, semi-automated, and automated. Biotechnol. Rep. 2015, 7, 9–16. [Google Scholar] [CrossRef] [Green Version]
  20. Vembadi, A.; Menachery, A.; Qasaimeh, M.A. Cell Cytometry: Review and Perspective on Biotechnological Advances. Front. Bioeng. Biotechnol. 2019, 7, 147. [Google Scholar] [CrossRef]
  21. Meijering, E. Cell Segmentation: 50 Years Down the Road. IEEE Signal Process. Mag. 2012, 29, 140–145. [Google Scholar] [CrossRef]
  22. Vicar, T.; Balvan, J.; Jaros, J.; Jug, F.; Kolar, R.; Masarik, M.; Gumulec, J. Cell segmentation methods for label-free contrast microscopy: Review and comprehensive comparison. BMC Bioinform. 2019, 20, 360. [Google Scholar] [CrossRef] [PubMed]
  23. Hilsenbeck, O.; Schwarzfischer, M.; Skylaki, S.; Schauberger, B.; Hoppe, P.S.; Loeffler, D.; Kokkaliaris, K.D.; Hastreiter, S.; Skylaki, E.; Filipczyk, A.; et al. Software tools for single-cell tracking and quantification of cellular and molecular properties. Nat. Biotechnol. 2016, 34, 703–706. [Google Scholar] [CrossRef]
  24. Available online: https://paperswithcode.com/task/cell-segmentation/codeless (accessed on 20 October 2020).
  25. Available online: https://www.mathworks.com/matlabcentral/fileexchange/82370-confluence-viewer (accessed on 20 October 2020).
  26. Available online: https://au.mathworks.com/matlabcentral/fileexchange/82375-confluence-viewer-cell-images-for-constructing-a-model (accessed on 20 October 2020).
  27. Wright Muelas, M.; Ortega, F.; Breitling, R.; Bendtsen, C.; Westerhoff, H.V. Rational cell culture optimization enhances experimental reproducibility in cancer cells. Sci. Rep. 2018, 8, 3029. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  28. Busschots, S.; O’Toole, S.; O’Leary, J.J.; Stordal, B. Non-invasive and non-destructive measurements of confluence in cultured adherent cell lines. MethodsX 2015, 2, 8–13. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Typical cell images of different confluence levels, from 5.5 to 94.5%. These are the raw color images. The algorithm designed in this study was used to calculate the confluence percentages of these images.
Figure 2. A flowchart of our scheme. There were 10 datasets representing different confluence percentages, from 10 to 100%. This flowchart demonstrates the processing of one dataset.
Figure 3. A demonstration of image processing by top-hat transformation and thresholding segmentation for different cell confluence levels.
Figure 4. Regression models of the training data. There were 30 cell confluence images. The circle points represent the 30 data points. The red curve is the quadratic model and the black curve is the shape-preserving cubic interpolation.
Figure 5. The measurement of cell confluence performed using various approaches. Each datum point represents a mean of the cell confluence determined by (A) the algorithm; (B) novice students (n = 6); (C) experienced researchers (n = 6) with a time limitation (1 min); or (D) experienced researchers without a time limitation. Cell photos were marked from 1 to 10, representing the increased cell number in the cell dishes. Five photos of each confluence condition were used for the evaluation.
Figure 6. The correlations of the cell confluence levels measured by novice students and by (A) experienced researchers with a time limitation (examined the images in 1 min); (B) experienced researchers without a time limitation; and (C) the algorithm. Bland–Altman analysis for determining the 95% limit of agreement between (D) novices and experienced researchers with a time limitation (examined the images in 1 min); (E) novices and experienced researchers without a time limitation; and (F) novices and the algorithm.
Figure 7. The correlation of the algorithm-determined cell confluence and that determined by (A) experienced researchers with a time limitation (examined the images in 1 min) and (B) experienced researchers without a time limitation. (C) The correlation of the cell confluence levels measured by experienced researchers with or without a time limitation. Bland–Altman analysis for determining the 95% limit of agreement between (D) experienced researchers with a time limitation (examined the images in 1 min) and the algorithm; (E) experienced researchers without a time limitation and the algorithm; and (F) experienced researchers with and without a time limitation.
