This seems to be the design intent: as with Newtonsoft, JavaScriptSerializer and DataContractJsonSerializer, the dictionary keys and values are serialized, not the regular properties. As an alternative to extending Dictionary<TKey, TValue>, you can get the JSON you want by encapsulating a dictionary in a container class and marking the dictionary with JsonExtensionDataAttribute:
internal class Translation
{
    public string Name { get; set; }

    [JsonExtensionData]
    public Dictionary<string, object> Data { get; set; } = new Dictionary<string, object>();
}
And then serialize as follows:
var translation = new Translation
{
    Name = "donkey",
    Data =
    {
        {"key1", "value1"},
        {"key2", "value2"},
        {"key3", "value3"},
    },
};

var options = new JsonSerializerOptions
{
    PropertyNamingPolicy = JsonNamingPolicy.CamelCase,
    // Other options as required
    WriteIndented = true,
};

var json = JsonSerializer.Serialize(translation, options);
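With those options, the serialized output should look something like the following (the extension-data entries are written as top-level properties alongside name):
{
  "name": "donkey",
  "key1": "value1",
  "key2": "value2",
  "key3": "value3"
}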
Do note this restriction from the docs:
The dictionary's TKey value must be String, and TValue must be JsonElement or Object.
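In practice this means the extension data property must be declared with one of the supported shapes; both of the declarations below are accepted, whereas something like Dictionary<string, string> is not:
[JsonExtensionData]
public Dictionary<string, object> Data { get; set; }

// Or, if you prefer to work with the raw JSON values:
[JsonExtensionData]
public Dictionary<string, JsonElement> Data { get; set; }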
(As an aside, a similar approach works with Newtonsoft, which has its own JsonExtensionDataAttribute. If you are using both libraries, be sure not to get the attributes confused.)
Demo fiddle #1 here.
If this modification to your data model is not convenient, you can introduce a custom JsonConverter<Translation> that (de)serializes a DTO like the model above, then maps the DTO from and to your final model:
internal class Translation : Dictionary<string, string>
{
    public string Name { get; set; }
}
internal class TranslationConverter : JsonConverter<Translation>
{
    // DTO that mirrors the container-class approach: Name is a regular
    // property and everything else is captured as extension data.
    internal class TranslationDTO
    {
        public string Name { get; set; }

        [JsonExtensionData]
        public Dictionary<string, object> Data { get; set; }
    }

    public override Translation Read(ref Utf8JsonReader reader, Type typeToConvert, JsonSerializerOptions options)
    {
        var dto = JsonSerializer.Deserialize<TranslationDTO>(ref reader, options);
        if (dto == null)
            return null;
        // Map the DTO back onto the dictionary-derived model.
        var translation = new Translation { Name = dto.Name };
        foreach (var p in dto.Data)
            translation.Add(p.Key, p.Value?.ToString());
        return translation;
    }

    public override void Write(Utf8JsonWriter writer, Translation value, JsonSerializerOptions options)
    {
        // Copy the dictionary entries into the DTO's extension data and let
        // the serializer handle naming policy and formatting.
        var dto = new TranslationDTO { Name = value.Name, Data = value.ToDictionary(p => p.Key, p => (object)p.Value) };
        JsonSerializer.Serialize(writer, dto, options);
    }
}
And then serialize as follows:
var translation = new Translation
{
    Name = "donkey",
    ["key1"] = "value1",
    ["key2"] = "value2",
    ["key3"] = "value3",
};

var options = new JsonSerializerOptions
{
    Converters = { new TranslationConverter() },
    PropertyNamingPolicy = JsonNamingPolicy.CamelCase,
    // Other options as required
    WriteIndented = true,
};

var json = JsonSerializer.Serialize(translation, options);
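Because the converter also implements Read, the same options deserialize the JSON back into the dictionary-derived model; a quick round-trip sketch (variable names are illustrative):
var roundTripped = JsonSerializer.Deserialize<Translation>(json, options);
// roundTripped.Name is "donkey" and roundTripped["key1"] is "value1", etc.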
I find it simpler to (de)serialize to a DTO rather than to work directly with Utf8JsonReader and Utf8JsonWriter, as edge cases and naming policies get handled automatically. Only if performance is critical will I work directly with the reader and writer.
With either approach, JsonNamingPolicy.CamelCase is required to bind "name" in the JSON to Name in the model.
Demo fiddle #2 here.