1. Definition
In computer science, reflection is the process by which a computer program can observe and modify its own structure and behavior. In other words, reflection-oriented programming adds the ability to modify program instructions dynamically at runtime and invoke them in their modified state. The program's architecture itself can be decided at runtime based on the data, services, and specific operations that are applicable at that moment.
The reflective programming paradigm introduces the concept of meta-information, which captures knowledge of a program's structure. Meta-information records details such as the name of a class, the names of its methods and parent classes, and what a given compound statement is supposed to do. Without such information, many reflective tasks would be awkward or impossible to accomplish.
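As a minimal illustration (the demo class name is mine), this meta-information can be read back at runtime in C#:

```csharp
using System;
using System.Reflection;

public static class MetaInfoDemo
{
    public static void Main()
    {
        // Inspect the meta-information of the String type.
        Type type = typeof(string);
        Console.WriteLine(type.Name);          // class name: String
        Console.WriteLine(type.BaseType.Name); // parent class: Object

        // Names of the contained public instance methods.
        foreach (MethodInfo method in type.GetMethods(BindingFlags.Public | BindingFlags.Instance))
            Console.WriteLine(method.Name);
    }
}
```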
2. Reflection in .NET
Before diving into reflection, two concepts need to be introduced first: metadata and types. Metadata is used to describe component contracts in the .NET Framework. Types are the building blocks of every CLR program, and the description of a CLR type resides in metadata. Reflection is built on top of this metadata.
2.1 Metadata
The CLR was designed from the start with a fully specified format for describing component contracts. This format is referred to generically as metadata. CLR metadata is machine-readable, and its format is fully specified. Additionally, the CLR provides facilities that let programs read and write metadata without knowledge of the underlying file format. CLR metadata is cleanly and easily extensible via custom attributes, which are themselves strongly typed. It also contains component dependency and version information, enabling a new range of techniques for handling component versioning.
Metadata describes all classes and class members that are defined in the assembly, and the classes and class members that the current assembly will call from another assembly. The metadata for a method contains the complete description of the method, including the class (and the assembly that contains the class), the return type and all of the method parameters.
A compiler for the common language runtime (CLR) generates metadata during compilation and stores it (in a binary format) directly into assemblies and modules. The importance of metadata in .NET cannot be overstated: it allows us to write a component in C# and have an application written in Visual Basic .NET consume it. The metadata description of a type allows the runtime to lay out an object in memory, enforce security and type safety, and ensure version compatibility. The CLR postpones decisions about in-memory representation until a type is first loaded at runtime. Metadata makes .NET assemblies fully self-describing, which lets developers share components across languages and eliminates the need for header files.
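Because assemblies are self-describing, their identity, version, and dependency information can be read back at runtime; a small sketch:

```csharp
using System;
using System.Reflection;

public static class SelfDescribingDemo
{
    public static void Main()
    {
        // Pick the assembly that defines System.Object (the core library).
        Assembly assembly = typeof(object).Assembly;

        // The full name includes the version and public key token.
        Console.WriteLine(assembly.FullName);

        // Dependency information recorded in the metadata.
        foreach (AssemblyName dependency in assembly.GetReferencedAssemblies())
            Console.WriteLine("Depends on: " + dependency.FullName);
    }
}
```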
2.2 Type at Runtime
As you know, CLR-based programs are built out of one or more molecules called assemblies. These assemblies are themselves built out of one or more atoms called modules. A module, in turn, can be split into subatomic particles called types. Types are the building blocks of every CLR program. A CLR type is a named, reusable abstraction, and the description of a CLR type resides in the metadata of a CLR module.
Every object in the CLR begins with a fixed-size object header, as in the following figure.
The object header has two fields. The first field of the object header is the sync block index. One uses this field to lazily associate additional resources (e.g., locks, COM objects) with the object. The second field of the object header is a handle to an opaque data structure that represents the object's type. This data structure contains a complete description of the type, including a pointer to the in-memory representation of the type’s metadata. Although the location of this handle is undocumented, there is explicit support for it via the System.RuntimeTypeHandle type. As a point of interest, in the current implementation of the CLR, an object reference always points to the type handle field of the object's header. The first user-defined field is always sizeof(void*) bytes beyond where the object reference points to.
Although the type handle and the data structure it references are largely opaque to programmers working with the CLR, most of the information stored in this data structure is made accessible via the System.Type class. This brings us to reflection, which makes all aspects of a type's definition available to programs, both at development time and at runtime.
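The round trip from an object's type handle to its full System.Type description can be seen directly (the demo class name is mine):

```csharp
using System;

public static class TypeHandleDemo
{
    public static void Main()
    {
        object obj = "hello";

        // The opaque type handle stored in the object's header...
        RuntimeTypeHandle handle = Type.GetTypeHandle(obj);

        // ...round-trips to the complete type description.
        Type type = Type.GetTypeFromHandle(handle);
        Console.WriteLine(type.FullName); // System.String
    }
}
```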
2.3 System.Reflection
The following diagram shows the reflection object model.
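The core containment hierarchy of that object model (Assembly contains Modules, Modules contain Types, Types contain Members) can also be walked in code; a minimal sketch, with the demo class name being mine:

```csharp
using System;
using System.Reflection;

public static class ObjectModelDemo
{
    public static void Main()
    {
        // Start at the top of the hierarchy: the current assembly.
        Assembly assembly = typeof(ObjectModelDemo).Assembly;
        foreach (Module module in assembly.GetModules())
        {
            Console.WriteLine("Module: " + module.Name);
            foreach (Type type in module.GetTypes())
            {
                Console.WriteLine("  Type: " + type.Name);
                foreach (MemberInfo member in type.GetMembers())
                    Console.WriteLine("    Member: " + member.Name);
            }
        }
    }
}
```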
3. Use of Reflection
To maximize runtime flexibility, consider reflection and how it can improve your software. System.Reflection is a valuable framework because it is the .NET foundation of several good practices that revolve around dynamic, plug-in, dependency-injection, and late-binding patterns.
Typical reflection-centric tasks fall into two categories:
1. Inspection. Inspection entails analyzing objects and types to gather structured information about their definition and behavior.
2. Manipulation. Manipulation uses the information gained through inspection to invoke code dynamically, create new instances of discovered types, or even restructure types and objects on the fly.
From a programmer's perspective, reflection technology can sometimes blur the conventional distinction between objects and types. For instance, a typical reflection-centric task might be:
1. Start with a handle to an object O and use reflection to acquire a handle to its associated definition, a type T.
2. Inspect type T and acquire a handle to its method, M.
3. Invoke method M on another object, O1.
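Those three steps can be sketched concretely (the names O, T, M, and O1 are the placeholders from the list above):

```csharp
using System;
using System.Reflection;

public static class ThreeStepDemo
{
    public static void Main()
    {
        // 1. Start with an object O and acquire its type T.
        object o = "hello";
        Type t = o.GetType();

        // 2. Inspect T and acquire a handle to one of its methods, M.
        MethodInfo m = t.GetMethod("ToUpper", Type.EmptyTypes);

        // 3. Invoke M on another object, O1.
        object o1 = "world";
        Console.WriteLine(m.Invoke(o1, null)); // WORLD
    }
}
```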
3.1 Example: Invoke method dynamically
The Services class provides several operations. The client can send an IDRequest or a NameRequest to the server, depending on the service it wants to use. The server also allows the client to send multiple sub-requests in one request, which is a composite request. The following code is the service definition.
public class Services
{
    public IDReply GetID(IDRequest request)
    {
        return new IDReply();
    }

    public NameReply GetName(NameRequest request)
    {
        return new NameReply();
    }

    // Handles a composite request by dispatching each sub-request via the Broker.
    // (The generic signature was garbled in the original; it is reconstructed here.)
    public ICollection<IReply> ProcessRequests(ICollection<IRequest> requestList)
    {
        ICollection<IReply> replyList = new List<IReply>();
        Broker broker = new Broker();
        foreach (IRequest request in requestList)
        {
            IReply reply = broker.ProcessRequest(request);
            replyList.Add(reply);
        }
        return replyList;
    }
}
Broker.ProcessRequest() uses reflection to find the method corresponding to the request.
public IReply ProcessRequest(IRequest request)
{
    IReply reply = null;
    MethodInfo method = FindMethod(request.GetType());
    if (method != null)
    {
        object[] parameters = new object[1];
        parameters[0] = request;
        reply = method.Invoke(service, parameters) as IReply;
    }
    return reply;
}
Without reflection, the handling logic would need to be hard-coded, and the code might look like this:
public IReply ProcessRequest(IRequest request)
{
    if (request is IDRequest)
        return service.GetID((IDRequest)request);
    if (request is NameRequest)
        return service.GetName((NameRequest)request);
    return null;
}
This method grows larger and larger as we add new operations to the class, and it is easy to miss the handling logic for some request type.
To eliminate this tedious and error-prone work, I decided to use reflection here.
I use the Broker class to dispatch the sub-requests. First, all the methods of the Services class can be retrieved with the Type.GetMethods() method, as in the following code.
private MethodInfo FindMethod(Type paramType)
{
    // Try to find a public instance method that matches by parameter type.
    MethodInfo[] methodList =
        servicesType.GetMethods(BindingFlags.Instance | BindingFlags.Public);
    return FindMethodInList(methodList, paramType);
}
Then I have to find the method that accepts the request type; this is done in the FindMethodInList() method, as in the following code.
private MethodInfo FindMethodInList(MethodInfo[] methodList, Type paramType)
{
    foreach (MethodInfo method in methodList)
    {
        // The method must take exactly one parameter of the request type.
        if (MethodAcceptsOneParameter(method, paramType))
            return method;
    }
    return null;
}
private static bool MethodAcceptsOneParameter(MethodInfo method, Type paramType)
{
    ParameterInfo[] parameters = method.GetParameters();
    return parameters.Length == 1
        && parameters[0].ParameterType == paramType;
}
In this example, reflection removes the redundant dispatch code, and the method to invoke is determined at runtime.
3.2 Example: Implement custom attribute
Attributes can be used to achieve declarative programming. According to the definition from Wikipedia, a program is "declarative" if it is written in a purely functional, logic, or constraint programming language. In a declarative program you write (declare) a data structure that is processed by a standard algorithm (for that language) to produce the desired result.
Attributes enhance flexibility in software systems because they promote loose coupling of functionality, and custom attributes let users leverage this loose coupling for their own purposes. Once we have associated our attribute with various source code elements, we can query the metadata of those elements at runtime using the .NET Framework reflection classes, and specific behavior can be attached to those elements.
The NUnit framework defines several attributes, such as TestFixtureAttribute and TestAttribute. If TestFixtureAttribute is applied to a class, that class is a test class; if TestAttribute is applied to a method, that method is recognized as a test method. When NUnit loads an assembly, it can therefore find the test classes and test methods. In this example, I define two attributes:
[AttributeUsage(AttributeTargets.Method)]
public class TestMethodAttribute : Attribute
{
}
[AttributeUsage(AttributeTargets.Class)]
public class TestClassAttribute : Attribute
{
}
TestSuite is defined to load the assembly and find all the test classes, and the test methods within those classes, as in the following code.
public class TestSuite
{
    // (The generic declaration was garbled in the original; it is reconstructed here.)
    private readonly IList<TestFixtureBase> collections = new List<TestFixtureBase>();

    public TestSuite(string assemblyFile)
    {
        // Load for execution: a reflection-only load would not permit
        // the Activator.CreateInstance call below.
        Assembly currentAssembly = Assembly.LoadFrom(assemblyFile);
        foreach (Type type in currentAssembly.GetTypes())
        {
            if (IsTestClass(type))
            {
                collections.Add(Activator.CreateInstance(type) as TestFixtureBase);
            }
        }
    }

    // Run all the methods marked with the [TestMethod] attribute in all test fixtures.
    public void RunTests()
    {
        foreach (TestFixtureBase testFixture in collections)
        {
            Type fixtureType = testFixture.GetType();
            foreach (MethodInfo method in fixtureType.GetMethods())
            {
                if (IsTestMethod(method))
                    method.Invoke(testFixture, null);
            }
        }
    }

    private static bool IsTestClass(Type type)
    {
        return typeof(TestFixtureBase).IsAssignableFrom(type)
            && (type.GetCustomAttributes(typeof(TestClassAttribute), false).Length > 0);
    }

    private static bool IsTestMethod(MethodInfo methodInfo)
    {
        return HasTestAttribute(methodInfo);
    }

    private static bool HasTestAttribute(MethodInfo methodInfo)
    {
        object[] testAttrs = methodInfo.GetCustomAttributes(typeof(TestMethodAttribute), true);
        return testAttrs.Length > 0;
    }
}
In the TestSuite constructor, the assembly that contains the test classes is loaded from the given file path. In this example, all test classes must extend the TestFixtureBase class, because that base class contains some common methods. Type.IsAssignableFrom() determines whether a class extends TestFixtureBase. Then Type.GetCustomAttributes() is used to find out whether the class carries TestClassAttribute. When a test class is found, Activator.CreateInstance() creates an instance of it, which is added to a list.
In the RunTests() method, the list is iterated to find the test methods. Finally, MethodInfo.Invoke() is used to invoke each test method.
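Putting it together, a hypothetical client of this TestSuite might look like this (the fixture name and assembly file name are placeholders, and TestFixtureBase is assumed to be defined as the article describes):

```csharp
// A fixture as the framework above expects it: it extends TestFixtureBase
// and is marked with the custom attributes defined earlier.
[TestClass]
public class CalculatorTests : TestFixtureBase
{
    [TestMethod]
    public void AdditionWorks()
    {
        if (1 + 1 != 2)
            throw new Exception("addition failed");
    }
}

// Load the assembly containing the fixtures and run every test method.
TestSuite suite = new TestSuite("MyTests.dll");
suite.RunTests();
```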
4. Drawback
The drawbacks stem from reflection's primary purpose of supporting late binding. As a result, it cannot treat code as just raw data, which leads to limitations such as:
1. You cannot unload an assembly once it has been loaded into an AppDomain by System.Reflection, although you can unload the whole AppDomain in order to unload the assembly.
2. At any time, browsing the code of an assembly loaded with reflection might trigger a Code Access Security (CAS) exception, because the data you are examining is still considered code.
3. It has poor performance (CAS security checks likely play a major role in this).
4. It consumes a lot of memory (again, probably because the CLR treats the data as code), and this memory is hard to release once you have walked all the code of an assembly.
5. You cannot load two different versions of an assembly into the same AppDomain.
For a deeper understanding of how reflection relies internally on caches that make memory grow, along with benchmarks of typical reflection performance, read Joel Pobar's article "Dodge Common Performance Pitfalls to Craft Speedy Applications". Since .NET 2.0, System.Reflection supports a kind of read-only mode, but most of these problems persist in that mode.
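One common mitigation for the performance drawback (a sketch of a general technique, not something from this article) is to pay the reflection cost once and cache a strongly typed delegate, instead of calling MethodInfo.Invoke on every dispatch:

```csharp
using System;
using System.Reflection;

public static class DelegateCacheDemo
{
    public static void Main()
    {
        MethodInfo toUpper = typeof(string).GetMethod("ToUpper", Type.EmptyTypes);

        // Slow path: a reflective invocation on every call.
        object slow = toUpper.Invoke("hello", null);

        // Fast path: bind once to an open instance delegate (the string
        // argument becomes the receiver), then call at near-native speed.
        Func<string, string> fast =
            (Func<string, string>)Delegate.CreateDelegate(typeof(Func<string, string>), toUpper);
        Console.WriteLine(fast("hello")); // HELLO
    }
}
```

The delegate can be stored in a dictionary keyed by request type, which fits the Broker example above naturally.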