
Set Decimal Precision


Setting decimal precision in C programming involves controlling how many decimal places are displayed for floating-point numbers (like float and double) when using printf or other similar formatting functions. By default, six digits are shown after the decimal point:

Example

#include <stdio.h>

int main() {
  float myFloatNum = 3.5;
  double myDoubleNum = 19.99;

  printf("%f\n", myFloatNum);
  printf("%lf", myDoubleNum);
  return 0;
}


Output

3.500000
19.990000
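
Note that %f and %lf behave the same in printf: a float argument is automatically promoted to double when passed to a variadic function like printf, so both specifiers print six digits after the decimal point by default. A small sketch showing %f used with a double:

Example

#include <stdio.h>

int main() {
  double myDoubleNum = 19.99;

  // %f works for double too, since float arguments are
  // promoted to double in printf's argument list
  printf("%f", myDoubleNum);
  return 0;
}

Output

19.990000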


If you want to remove the extra zeros (set decimal precision), you can use a dot (.) followed by a number that specifies how many digits should be shown after the decimal point:

Example

#include <stdio.h>

int main() {
  float myFloatNum = 3.5;

  printf("%f\n", myFloatNum);   // Default will show 6 digits after the decimal point
  printf("%.1f\n", myFloatNum); // Only show 1 digit
  printf("%.2f\n", myFloatNum); // Only show 2 digits
  printf("%.4f", myFloatNum);   // Only show 4 digits
  return 0;
}


Output

3.500000
3.5
3.50
3.5000
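
The precision specifier rounds rather than truncates: when a value has more digits than requested, the last displayed digit is rounded to the nearest value. A small sketch with illustrative literals:

Example

#include <stdio.h>

int main() {
  printf("%.2f\n", 3.14159); // Rounds down to 3.14
  printf("%.2f", 19.996);    // Rounds up to 20.00
  return 0;
}

Output

3.14
20.00

The precision can also be chosen at runtime by writing an asterisk (*) instead of a number; printf then reads the precision from an extra int argument. A minimal sketch (the variable name digits is illustrative):

Example

#include <stdio.h>

int main() {
  float myFloatNum = 3.5;
  int digits = 3; // Number of decimals, decided at runtime

  printf("%.*f", digits, myFloatNum); // Equivalent to "%.3f" here
  return 0;
}

Output

3.500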