I am trying to better understand surrogate pairs and Unicode implementation in Delphi.
Let's get some terminology out of the way.
Each "character" (known as a grapheme) that is defined by Unicode is assigned a unique codepoint.
In a Unicode Transformation Format (UTF) encoding - UTF-7, UTF-8, UTF-16, and UTF-32 - each codepoint is encoded as a sequence of codeunits. The size of each codeunit is determined by the encoding - 7 bits for UTF-7, 8 bits for UTF-8, 16 bits for UTF-16, and 32 bits for UTF-32 (hence their names).
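As a quick illustration (a minimal console sketch, assuming Delphi 2009+ where casting a String to UTF8String performs the conversion), the same codepoint occupies a different number of codeunits in each encoding:
var
  S: String;       // UTF-16
  U8: UTF8String;  // UTF-8
begin
  S := #$20AC;         // U+20AC EURO SIGN, a single codepoint
  U8 := UTF8String(S); // convert the UTF-16 data to UTF-8
  Writeln(Length(S));  // 1 - one 16-bit codeunit in UTF-16
  Writeln(Length(U8)); // 3 - three 8-bit codeunits in UTF-8
end;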
In Delphi 2009 and later, String is an alias for UnicodeString, and Char is an alias for WideChar. WideChar is 16 bits. A UnicodeString holds a UTF-16 encoded string (in earlier versions of Delphi, the equivalent string type was WideString), and each WideChar is a UTF-16 codeunit.
In UTF-16, a codepoint can be encoded using either 1 or 2 codeunits. 1 codeunit can encode codepoint values in the Basic Multilingual Plane (BMP) range - $0000 to $FFFF, inclusive (except $D800 to $DFFF, which are reserved for surrogates and are not valid codepoints on their own). Higher codepoints require 2 codeunits, known as a surrogate pair.
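For example (a small sketch using the Character unit's TCharacter helpers, available since Delphi 2009), a codepoint above the BMP such as U+1D11E MUSICAL SYMBOL G CLEF occupies two WideChar codeunits:
uses
  Character;

var
  S: String;
begin
  S := #$D834#$DD1E;                         // surrogate pair encoding U+1D11E
  Writeln(Length(S));                        // 2 - two codeunits, one codepoint
  Writeln(TCharacter.IsSurrogatePair(S, 1)); // TRUE
  Writeln(TCharacter.ConvertToUtf32(S, 1));  // 119070 ($1D11E)
end;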
If I call Length() on the Unicode string S := 'H'#$0302'a'#$0301#$0323'V'#$0302'e' in Delphi, I will get back 8.
This is because the lengths of the individual characters [H #$0302], [a #$0301 #$0323], [V #$0302], and [e] are 2, 3, 2, and 1 respectively.
This is because H has a surrogate, a has two additional surrogates, V has a surrogate, and e has no surrogates.
Yes, there are 8 WideChar elements (codeunits) in your UTF-16 UnicodeString. What you are calling "surrogates" are actually known as "combining marks". Each combining mark is its own unique codepoint, and thus its own codeunit sequence.
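You can verify that with the Character unit (a small sketch, using U+0301 COMBINING ACUTE ACCENT as a typical example): a combining mark is classified under a mark category, and it is not a UTF-16 surrogate:
uses
  Character;

var
  Mark: Char;
begin
  Mark := #$0301; // U+0301 COMBINING ACUTE ACCENT
  // combining accents are category Mn (non-spacing mark)...
  Writeln(TCharacter.GetUnicodeCategory(Mark) = TUnicodeCategory.ucNonSpacingMark); // TRUE
  // ...and are ordinary BMP codepoints, not UTF-16 surrogates
  Writeln(TCharacter.IsSurrogate(Mark)); // FALSE
end;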
If I wanted to return the second element in the string including all surrogates, [a #$0301 #$0323], how would I do that?
You have to start at the beginning of the UnicodeString and analyze each WideChar until you find one that is not a combining mark attached to a previous WideChar. On Windows, the easiest way to do that is to use the CharNextW() function, eg:
uses
  Windows;

var
  S: String;
  P: PChar;
begin
  // base letters followed by combining marks (circumflex, acute, dot below)
  S := 'H'#$0302'a'#$0301#$0323'V'#$0302'e';
  P := CharNext(PChar(S)); // returns a pointer to the a element
end;
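Note that CharNext() returns a pointer to the terminating null character when it reaches the end of the string, and calling it on the null terminator returns the same pointer, so a scanning loop can safely test for #0 to stop.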
The Delphi RTL does not have an equivalent function. You would have to write one manually, or use a third-party library. The RTL does have a StrNextChar() function, but it only handles UTF-16 surrogates, not combining marks (CharNext() handles both). So, you could use StrNextChar() to scan through each codepoint in the UnicodeString, but you would have to look at each codepoint to know whether it is a combining mark or not, eg:
uses
  SysUtils, Character;

function MyCharNext(P: PChar): PChar;
begin
  if (P <> nil) and (P^ <> #0) then
  begin
    // step over one codepoint (StrNextChar advances past a full
    // surrogate pair), then keep stepping while the codepoint that
    // follows is a combining mark attached to the previous one.
    // Combining accents such as #$0301 are category Mn
    // (ucNonSpacingMark), so testing ucCombiningMark (Mc) alone
    // would miss them.
    Result := StrNextChar(P);
    while GetUnicodeCategory(Result^) in [ucNonSpacingMark, ucCombiningMark, ucEnclosingMark] do
      Result := StrNextChar(Result);
  end else begin
    Result := nil;
  end;
end;
var
  S: String;
  P: PChar;
begin
  S := 'H'#$0302'a'#$0301#$0323'V'#$0302'e';
  P := MyCharNext(PChar(S)); // should return a pointer to the a element
end;
I know I would need to do some sort of testing of the individual bytes.
Not the bytes, but the codepoints that they represent when decoded.
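For the surrogate-pair case, that decoding is just arithmetic. Here is a minimal sketch (DecodeSurrogatePair is a hypothetical helper, not an RTL routine, and it assumes it is handed a valid high/low pair):
function DecodeSurrogatePair(Hi, Lo: WideChar): UCS4Char;
begin
  // a high surrogate ($D800..$DBFF) supplies the top 10 bits and a
  // low surrogate ($DC00..$DFFF) the bottom 10 bits of (codepoint - $10000)
  Result := $10000 + ((Ord(Hi) - $D800) shl 10) + (Ord(Lo) - $DC00);
end;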
I ran some tests using the routine
function GetFirstCodepointSize(const S: UTF8String): Integer
Look closely at that function signature. See the parameter type? It is a UTF-8 string, not a UTF-16 string. This was even stated in the answer you got that function from:
Here is an example of how to parse a UTF8 string
UTF-8 and UTF-16 are very different encodings, and thus have different semantics. You cannot use UTF-8 semantics to process a UTF-16 string, and vice versa.
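To illustrate the difference, here is a hypothetical UTF-16 counterpart of that routine (a sketch only; GetFirstCodepointSizeUTF16 is not an RTL function). It tests 16-bit codeunits for the surrogate range rather than testing 8-bit lead bytes:
function GetFirstCodepointSizeUTF16(const S: String): Integer;
begin
  Result := 0;
  if S = '' then Exit;
  // a WideChar in $D800..$DBFF is a high surrogate, so the first
  // codepoint spans two codeunits; anything else is a single codeunit
  if (Ord(S[1]) >= $D800) and (Ord(S[1]) <= $DBFF) then
    Result := 2
  else
    Result := 1;
end;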
Is there a reliable way in Delphi to determine where an element in a Unicode String starts and ends?
Not directly. You have to parse the string from the beginning, skipping elements as needed until you reach the desired element. Remember that each codepoint may be encoded as either 1 or 2 codeunit elements, and each logical glyph may be encoded using multiple codepoints (and thus multiple codeunit sequences).
I know my terminology using the word element may be off, but I don't think codepoint and character are right either, particularly given that one element may have a codepoint size of 3, but have a length of only one.
1 glyph is composed of 1+ codepoints, and each codepoint is encoded as 1+ codeunits.
Could someone implement the following function?
function GetElementAtIndex(S: String; StrIdx : Integer): String;
Try something like this:
uses
  SysUtils, Character;

function MyCharNext(P: PChar): PChar;
begin
  Result := P;
  if Result <> nil then
  begin
    // step over one codepoint, then over any combining marks attached to it
    Result := StrNextChar(Result);
    while GetUnicodeCategory(Result^) in [ucNonSpacingMark, ucCombiningMark, ucEnclosingMark] do
      Result := StrNextChar(Result);
  end;
end;
function GetElementAtIndex(S: String; StrIdx: Integer): String;
var
  pStart, pEnd: PChar;
begin
  Result := '';
  if (S = '') or (StrIdx < 1) then Exit; // StrIdx is 1-based
  pStart := PChar(S);
  // skip ahead one element at a time until the requested index is reached
  while StrIdx > 1 do
  begin
    pStart := MyCharNext(pStart);
    if pStart^ = #0 then Exit;
    Dec(StrIdx);
  end;
  pEnd := MyCharNext(pStart);
  {$POINTERMATH ON}
  SetString(Result, pStart, pEnd - pStart);
end;