What do you mean by normalisation of pointers?

Question by manish


Ahtesham Ali

  • Jun 20th, 2006
 

Normalization functions take one of these combined (segment:offset) types as their first argument and a pointer to the same combined type as their second argument, and store the normalized form of the first into the second.


kbjarnason

  • Jul 2nd, 2010
 

Generally, this term applies only in ancient DOS-based compilers when using far or huge pointers.

In 16-bit DOS compilers, the default ("near mode/small mode") is to have all the data in one segment, with addresses sorted out just using offsets into the 64K segment.  If you use far/huge pointers, which aren't limited to the one segment, you have to take into account not just the offset, but the segment as well.

Thus, char *nptr; would be by default a "near" pointer, consisting of just an offset into the default data segment, where char far *fptr; would be a far pointer, with a segment value attached as well.

To access data via the far pointer, the compiler has to first set the active data segment to that of the far pointer, then apply the offset, then read or write the appropriate bit of memory.

Simple enough, but there's a catch.  Due to quirks of the x86 design, segments may be 64K in size, but they can _start on_ any 16-byte boundary.

What this means is that two "far" pointers, even if they point to the exact same spot in physical machine memory, may have different values.  One could have (for example) segment 0, offset 18, the other could have segment 1, offset 2.  Each resolves to physical address 18 (0*16 + 18 and 1*16 + 2 respectively), but the pointers are simply not identical.
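Here's a minimal sketch of the idea in plain standard C.  It only simulates segment:offset addressing; the FarPtr struct and to_linear function are made-up names for illustration, not real far pointers.  On the 8086 the physical address is segment * 16 + offset, so two different pairs can land on the same byte:

    #include <stdio.h>

    typedef struct {
        unsigned short seg;  /* segment, shifted left 4 bits by the CPU */
        unsigned short off;  /* 16-bit offset within that segment       */
    } FarPtr;

    static unsigned long to_linear(FarPtr p)
    {
        return (unsigned long)p.seg * 16UL + p.off;
    }

    int main(void)
    {
        FarPtr a = { 0, 18 };  /* segment 0, offset 18 */
        FarPtr b = { 1,  2 };  /* segment 1, offset 2  */

        /* Both resolve to physical address 18, yet the raw pairs differ. */
        printf("a -> %lu, b -> %lu\n", to_linear(a), to_linear(b));
        return 0;
    }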

What, then, happens if you need to compare these pointers?  What if you're trying to ensure you don't run off the end of a buffer, for example?  You need to compare your "working" pointer against the buffer's "base" pointer, plus its size, but if the pointers use different segment values and offsets to mean the same address, the comparison won't work worth spit.

As a result, you - or the compiler - have to "normalize" the pointers.  This means ensuring that the relevant pointers (your working pointer and the buffer base pointer, in this case) use the _same_ starting segment value, with whatever offset is appropriate to each, and _then_ comparing them.  This process is called "normalization".
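Continuing the made-up FarPtr sketch from above, normalization might look something like this: push as much of the address as possible into the segment, leaving an offset in the range 0..15, after which two pointers to the same physical byte compare equal field by field.  Again, this is just an illustration of the idea, not real compiler code:

    static FarPtr normalize(FarPtr p)
    {
        unsigned long linear = (unsigned long)p.seg * 16UL + p.off;
        FarPtr n;
        n.seg = (unsigned short)(linear >> 4);   /* paragraph-aligned segment */
        n.off = (unsigned short)(linear & 0xF);  /* leftover 0..15 bytes      */
        return n;
    }

    /* Comparing normalized pointers is then a plain field-by-field test. */
    static int far_equal(FarPtr a, FarPtr b)
    {
        FarPtr na = normalize(a), nb = normalize(b);
        return na.seg == nb.seg && na.off == nb.off;
    }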

Here's another catch: doing this sucks CPU cycles.  Not much for a single evaluation, but enough that if every pointer access were normalized, the code could slow down significantly.  Thus, compilers don't do it by default.

With "huge" pointers, things are a little different.  See, far pointers, while the include a segment value, are limited to accessing a single segment; their "offset" value is only 16 bits, so it wraps at 64K.  Nice for some purposes, not so nice for others.  It's somewhat efficient, at least, unless you're actually comparing pointers.

By contrast, huge pointers can span the entire memory, all segments.  The problem there is that the "offset" portion of the pointer address is _still_ only 16 bits, so _still_ wants to wrap on 64K boundaries, which defeats the purpose.  So, with huge pointers, _every_ modification of the pointer value results in it being normalized, with as much information as possible put into the "segment" portion.

As a result, the huge pointer never wraps at a 64k boundary, because its segment value has long since been updated to a segment "further up the line" and its offset reduced accordingly.
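In the same simulated style, a huge-pointer add would renormalize after every modification: do the arithmetic on the flat linear address, then split it back into segment:offset with the offset kept in 0..15, so the 16-bit offset never gets anywhere near wrapping (huge_add is again a hypothetical helper):

    static FarPtr huge_add(FarPtr p, unsigned long n)
    {
        unsigned long linear = (unsigned long)p.seg * 16UL + p.off + n;
        FarPtr r;
        r.seg = (unsigned short)(linear >> 4);   /* absorb as much as possible */
        r.off = (unsigned short)(linear & 0xF);  /* tiny remaining offset      */
        return r;
    }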

This means the huge pointers can span all available memory, but man, they crawl compared to everything else.  All that extra code really bogs things down.

