# C# 'unsafe' function — *(float*)(&result) vs. (float)(result)

Can anyone explain in a simple way the codes below:

```
public unsafe static float sample() {
    int result = 154 + (153 << 8) + (25 << 16) + (64 << 24);

    return *(float*)(&result); // don't know what for... please explain
}
```

Note: the above code uses an unsafe function.

For the above code, I'm having a hard time because I don't understand how its return value differs from the return value below:

```
return (float)(result);
```

Is it necessary to use an unsafe function if you're returning `*(float*)(&result)`?

On .NET a float is represented as an IEEE 754 binary32 single-precision number stored in 32 bits. The code constructs this number by assembling its bits into an int and then casts it to a float using unsafe pointers. This cast is what in C++ terms is called a reinterpret_cast: no conversion is done when the cast is performed; the bits are simply reinterpreted as a new type.

The number assembled is 4019999A in hexadecimal or 01000000 00011001 10011001 10011010 in binary:

• The sign bit is 0 (it is a positive number).
• The exponent bits are 10000000 (or 128) resulting in the exponent 128 - 127 = 1 (the fraction is multiplied by 2^1 = 2).
• The fraction bits are 00110011001100110011010 which, if nothing else, almost have a recognizable pattern of zeros and ones.

The float returned has the exact same bits as 2.4 converted to floating point and the entire function can simply be replaced by the literal 2.4f.

The final zero that sort of "breaks the bit pattern" of the fraction is there perhaps to make the float match something that can be written using a floating point literal?
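As a quick sanity check, the decomposition above can be verified in code (a sketch that uses `BitConverter` instead of unsafe pointers, so it compiles without the `/unsafe` flag):

```csharp
using System;

class BitBreakdown
{
    static void Main()
    {
        int result = 154 + (153 << 8) + (25 << 16) + (64 << 24);
        Console.WriteLine($"0x{result:X8}"); // 0x4019999A

        // IEEE binary32 layout: 1 sign bit, 8 exponent bits, 23 fraction bits.
        int sign     = (result >> 31) & 0x1;  // 0
        int exponent = (result >> 23) & 0xFF; // 128, so effective exponent 128 - 127 = 1
        int fraction = result & 0x7FFFFF;     // the 23 fraction bits
        Console.WriteLine($"{sign} {exponent} {Convert.ToString(fraction, 2).PadLeft(23, '0')}");

        // The same 32 bits read as a float give exactly 2.4f.
        Console.WriteLine(BitConverter.ToSingle(BitConverter.GetBytes(result), 0)); // 2.4
    }
}
```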

So what is the difference between a regular cast and this weird "unsafe cast"?

Assume the following code:

```
int result = 0x4019999A; // 1075419546
float normalCast = (float)result;
float unsafeCast = *(float*)&result; // Only possible in an unsafe context
```

The first cast takes the integer 1075419546 and converts it to its floating point representation, i.e. approximately 1075419546f. This involves computing the sign, exponent, and fraction bits required to represent the original integer as a floating point number; it is a non-trivial computation that has to be done.

The second cast is more sinister (and can only be performed in an unsafe context). The &result takes the address of result, returning a pointer to the location where the integer 1075419546 is stored. The pointer dereferencing operator * can then be used to retrieve the value pointed to by the pointer. Using *&result would retrieve the integer stored at that location; however, by first casting the pointer to a float* (a pointer to a float), a float is instead retrieved from the memory location, resulting in the float 2.4f being assigned to unsafeCast. So the narrative of *(float*)&result is: give me a pointer to result, assume the pointer is a pointer to a float, and retrieve the value pointed to by the pointer.

As opposed to the first cast, the second cast doesn't require any computation. It just shoves the 32 bits stored in result into unsafeCast (which, fortunately, is also 32 bits).

In general, performing a cast like that can fail in many ways, but by using unsafe you are telling the compiler that you know what you are doing.
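A runnable comparison of the two casts (using `BitConverter` to stand in for `*(float*)&result`, so the sketch compiles without the `/unsafe` flag):

```csharp
using System;

class CastComparison
{
    static void Main()
    {
        int result = 0x4019999A; // 1075419546

        // Value conversion: computes the float nearest to 1075419546.
        float normalCast = (float)result;

        // Bit reinterpretation: the same 32 bits, simply read as a float.
        float reinterpreted = BitConverter.ToSingle(BitConverter.GetBytes(result), 0);

        Console.WriteLine(normalCast);    // roughly 1.07541955E+09
        Console.WriteLine(reinterpreted); // 2.4
    }
}
```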

If I'm interpreting what the method is doing correctly, this is a safe equivalent:

```
public static float sample() {
    int result = 154 + (153 << 8) + (25 << 16) + (64 << 24);

    byte[] data = BitConverter.GetBytes(result);
    return BitConverter.ToSingle(data, 0);
}
```

As has been said already, it is re-interpreting the int value as a float.

This looks like an optimization attempt: instead of doing floating point calculations, you do integer calculations on the integer representation of a floating point number.

Remember, floats are stored as binary values just like ints.

After the calculation is done you are using pointers and casting to convert the integer into the float value.

This is not the same as casting the value to a float. That will turn the int value 1 into the float 1.0. In this case you turn the int value into the floating point number described by the binary value stored in the int.
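For instance (a small illustration, again using `BitConverter` in place of the pointer cast):

```csharp
using System;

class OneVsOne
{
    static void Main()
    {
        int one = 1;

        // Value cast: the number 1 becomes the number 1.0.
        Console.WriteLine((float)one); // 1

        // Bit reinterpretation: the pattern 0x00000001 happens to be the
        // smallest positive subnormal float, i.e. float.Epsilon (~1.4E-45).
        float tiny = BitConverter.ToSingle(BitConverter.GetBytes(one), 0);
        Console.WriteLine(tiny == float.Epsilon); // True
    }
}
```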

It's quite hard to explain properly. I will look for an example. :-)

Edit: Look here: http://en.wikipedia.org/wiki/Fast_inverse_square_root
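For reference, here is a C# sketch of that trick. The magic constant 0x5F3759DF comes from the linked article; `BitConverter` again stands in for the pointer casts so the sketch compiles without `/unsafe`:

```csharp
using System;

class FastInverseSquareRoot
{
    // Approximate 1/sqrt(x) with integer arithmetic on the float's bit
    // pattern, followed by one Newton-Raphson refinement step.
    internal static float InvSqrt(float x)
    {
        int i = BitConverter.ToInt32(BitConverter.GetBytes(x), 0); // bits of x as an int
        i = 0x5F3759DF - (i >> 1);                                 // the famous magic constant
        float y = BitConverter.ToSingle(BitConverter.GetBytes(i), 0);
        return y * (1.5f - 0.5f * x * y * y);                      // one Newton-Raphson step
    }

    static void Main()
    {
        Console.WriteLine(InvSqrt(4f));          // close to 0.5
        Console.WriteLine(1.0 / Math.Sqrt(4.0)); // 0.5
    }
}
```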

Re : What is it doing?

It takes the bytes stored in the int and instead interprets those bytes as a float (without conversion).

Fortunately, floats and ints have the same size: 4 bytes.

Because Sarge Borsch asked, here's the 'Union' equivalent:

```
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Explicit)]
struct ByteFloatUnion {
    [FieldOffset(0)] internal byte byte0;
    [FieldOffset(1)] internal byte byte1;
    [FieldOffset(2)] internal byte byte2;
    [FieldOffset(3)] internal byte byte3;
    [FieldOffset(0)] internal float single;
}

public static float sample() {
    ByteFloatUnion result;
    result.single = 0f;
    result.byte0 = 154;
    result.byte1 = 153;
    result.byte2 = 25;
    result.byte3 = 64;

    return result.single;
}
```

As others have already described, it's treating the bytes of an int as if they were a float.

You might get the same result without using unsafe code like this:

```
public static float sample()
{
    int result = 154 + (153 << 8) + (25 << 16) + (64 << 24);
    return BitConverter.ToSingle(BitConverter.GetBytes(result), 0);
}
```

But then it won't be very fast any more, and you might as well use floats/doubles and the Math functions.