In ASP.NET MVC applications using Windows Authentication against a domain, we have two possible ways to log in:

1) If the user is on a PC joined to the domain, login to the application is not needed because it is done automatically with the user’s credentials.

2) If the user wants to use the application from a PC outside the domain, he must enter his username/password.

In this second case, the user will probably want to log out after using the app. How can we do this?

As we can’t clear our credentials using .NET code, we must “log out” using JavaScript. That is, we clear the authentication cache.

In the header section we must add a small JavaScript function, Logout, that clears the authentication cache.
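A minimal sketch of such a function, assuming the usual IE-only ClearAuthenticationCache command:

<script type="text/javascript">
    function Logout() {
        // Ask Internet Explorer to drop its cached credentials
        // (supported since IE 6 SP1).
        document.execCommand("ClearAuthenticationCache");
    }
</script>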

And in the body section we call this function:

<a href="../../Default.aspx" onclick="javascript:Logout();">[ Salir ]</a>

Note: this code works fine on IE 6 SP1 or higher. Otherwise, the user must close all browser windows in order to log out.

Hope you find this useful…


Microsoft has provided a number of charting controls that can be downloaded for free and included in .NET 3.5 ASP.NET or WinForms projects. To use them with Visual Studio 2008, one needs to install an add-on that provides VS toolbox and IntelliSense integration. These controls are now included in .NET 4.0, so an additional download and installation won’t be necessary. The Samples Environment for MS Chart Controls contains examples of these controls.

Links:

Download Controls

Samples


jqGrid + Linq + Asp.Net MVC example

By Dario Quintana | Filed in ASP.NET, Asp.net MVC, Linq

For those who want to get these tools/frameworks working together and achieve a grid like this:

[Screenshot: jqGrid + Linq + ASP.NET MVC example]

Okay, if you like that, this is the stuff used to get it working:

Download the code example here.

The application was based on the well-known example from haacked.com.

Your customers’ diverse needs are met by the wide range of processors AMD offers for every type of PC. Here we show the updated product portfolio, which will continue the transition to the new line during the second half of the year, along with the different levels of performance these platforms deliver.
Recent launches have addressed the diverse needs that customers and system integrators have expressed for their wide variety of PCs and notebooks.

The updated Phenom and Athlon processor portfolio for the second half of this year can be seen in the chart accompanying this article; as it shows, we are in a transition period in which first-generation products still coexist with the new Athlon II and Phenom II models, a technology refresh that should be complete by the end of the year.

There are also some facts, related to the performance of each of these microprocessors, that integrators and resellers should keep in mind when evaluating which of these options best fits each customer’s needs.

The details
In the Athlon family, the new generation of Athlon II is arriving, based on the same core as the powerful Phenom II. This option (the AMD Athlon II X2 250) brings a series of performance improvements, notably in clock speed (3000 MHz) and a faster data bus (4000 MT/s), along with a noticeable reduction in power consumption, down to 65 watts, thanks to the 45 nm manufacturing process. For those with long memories, these specifications match the legendary and powerful Athlon FX (3 GHz, 2 MB L2 cache), but with the advantage of a more modern architecture and half the power consumption.

Another interesting point is that the AMD Athlon II X2 250 will work with DDR2 memory, as the previous version did, but will also work with the newer DDR3 standard, giving customers the flexibility to decide when the time is right to make that transition.

In other words, the upcoming AMD Athlon II X2 250 will offer more performance, more efficiency, and more compatibility.

Some benchmarks
In performance comparisons involving digital media workloads, the Athlon II X2 250 gains a significant advantage over earlier X2 chips, even relatively new models such as the X2 7750, while also reducing power consumption.

Source: ITSitio


A Microsoft Research team led by Helen J. Wang has created Gazelle (PDF), a browser-based OS, with the declared intent to tighten security when going online.

Gazelle is not a new operating system like Windows but a new type of browser that has a kernel acting as a multi-principal operating system responsible for managing resource protection and sharing resources between various web site principals. A security principal is “an entity that can be authenticated by a computer system or network. Authentication is the process of validating and confirming the identity of such an entity.” Janie Chang, a Microsoft Research team member, defines what a browser principal is and explains why adequate security matters:

In browser parlance, a principal generally equates to a Web site. Given that there is usually just one user at a time on a PC, the sharing of resources is actually across applications from different origins; in the case of Web pages, each page could consist of content from different principals, each staking out a share of computing resources. The browser is therefore the natural choice of application platform for managing principals and resource requests.

A Web page might offer content such as ads or newsfeeds from other Web-site principals. Yet to the browser, all these principals coexist in the same process or protection domain. An ad containing malicious or poorly written code could hog the network connection, degrade performance, freeze the entire page, or crash the browser. In a browser operating system, a “bad” principal would not be allowed to affect other principals, the browser, or the host machine.

Wang et al. define a principal as a web site “defined in the same-origin policy (SOP), which is labeled by a web site’s origin, the triple of (protocol, domain name, port).” To enforce principal protection, a browser kernel is introduced between the principals and the operating system as shown in Figure 1:
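As a small illustration (not code from the paper), the same-origin check behind that principal definition amounts to comparing the three components of the triple:

// Illustrative only: two URLs belong to the same SOP principal exactly when
// protocol, domain name and port all match.
public static bool SamePrincipal(Uri a, Uri b)
{
    return a.Scheme == b.Scheme && a.Host == b.Host && a.Port == b.Port;
}

// e.g. http://ads.example.com and http://example.com are different principals,
// even though they share the registry-controlled domain example.com.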

[Figure 1: Gazelle architecture, with the browser kernel between web site principals and the traditional OS]

The browser kernel runs in a separate protection domain and interposes between browser principals and the traditional OS. The browser kernel mediates the principals’ access to system resources and enforces security policies of the browser. Essentially, the browser kernel functions as an operating system to browser principals and manages the protection and sharing of system resources for them. The browser kernel also manages the browser chrome, such as the address bar and menus. The browser kernel receives all events generated by the underlying operating system including user events like mouse clicks or keyboard entries; these events are then dispatched to the appropriate principal instance. When the user navigates a window by clicking on a hyperlink that points to an URL at a different origin, the browser kernel creates the protection domain for the URL’s principal instance (if one doesn’t exist already) to render the target page, destroys the protection domain of the hyperlink’s host page, and re-allocates and re-initializes the window to the URL’s principal instance.

Wang et al. compare current security measures existing in Google Chrome with those implemented in Gazelle. Google Chrome has the following process models: monolithic process, process-per-browsing-instance, process-per-site-instance, and process-per-site. A browsing instance is made up of all interconnected windows, frames and sub-frames, while a site instance is a collection of pages coming from the same site and existing within a browsing instance. Finally, a site is defined as “set of SOP origins that share a registry-controlled domain name: for example, attackerAd.socialnet.com, alice.profiles.socialnet.com, and socialnet.com share the same registry-controlled domain name socialnet.com, and are considered to be the same site or principal by Chrome.” According to Wang et al.,

Chrome uses the process-per-site-instance model by default. Furthermore, … Chrome’s current implementation does not support strict site isolation in the process-per-site-instance and process-per-site models: embedded principals, such as a nested iframe sourced at a different origin from the parent page, are placed in the same process as the parent page. The monolithic and process-per-browsing-instance models in Chrome do not provide memory or other resource protection across multiple principals in a monolithic process or browser instance. The process-per-site model does not provide failure containment across site instances. Chrome’s process-per-site-instance model is the closest to Gazelle’s two-processes-per-principal-instance model, but with several crucial differences: (1) Chrome’s principal is site (see above) while Gazelle’s principal is the same as the SOP principal. (2) A web site principal and its embedded principals co-exist in the same process in Chrome, whereas Gazelle places them into separate protection domains. Pursuing this design led us to new research challenges including cross-principal display protection. (3) Plugin content from different principals or sites share a plugin process in Chrome, but are placed into separate protection domains in Gazelle. (4) Chrome relies on its rendering processes to enforce the same-origin policy among the principals that co-exist in the same process. These differences indicate that in Chrome, cross-principal (or -site) protection takes place in its rendering processes and its plugin process, in addition to its browser kernel. In contrast, Gazelle’s browser kernel functions as an OS, managing cross-principal protection on all resources, including display.

Comparing Gazelle with IE8, Wang et al. notice that

IE 8 uses OS processes to isolate tabs from one another. This granularity is insufficient since a user may browse multiple mutually distrusting sites in a single tab, and a web page may contain an iframe with content from an untrusted site (e.g., ads).

The overall conclusion of the research team is:

Fundamentally, Chrome and IE 8 have different goals from that of Gazelle. Their use of multiple processes is for failure containment across the user’s browsing sessions rather than for security. Their security goal is to protect the host machine from the browser and the web; this is achieved by process sandboxing. Chrome and IE 8 achieved a good milestone in the evolution of the browser architecture design. Looking forward, as the world creates and migrates more data and functionality into the web and establishes the browser as a dominant application platform, it is critical for browser designers to think of browsers as operating systems and protect web site principals from one another in addition to the host machine. This is Gazelle’s goal.

A Gazelle prototype has been built on IE7 using its backward compatibility parsing, DOM management and JavaScript engine. The performance of the browser is reported as comparable with that of IE8 and Google Chrome. Cross-origin script source protection is addressed by using the architecture shown in Figure 2. The idea is to sandbox the plug-in code in order to isolate any malicious activities and also let the browser run in case the plug-in fails.

This research project has raised eyebrows among those who fear Microsoft has not abandoned its plan to fully incorporate a browser into the operating system. Such a move would certainly be a major blow to many companies, because the browser tends to be the dominant application on the desktop. Microsoft assures us that this is not the intent, which is rather to increase browsing security. Currently, Gazelle is just a research project, and only time will tell whether it becomes a product, or at least is incorporated into IE, and what room is left for other browsers and online applications running on Windows.

Source: InfoQ


Microsoft OracleClient Deprecated

By Gonzalo | Filed in dotNet

Microsoft announced System.Data.OracleClient will be deprecated after .NET 4.0. Classes in the namespace will be marked obsolete in .NET 4.0 and removed from future releases. OracleClient is the ADO.NET provider for Oracle developed by Microsoft and shipped as part of the .NET Framework Class Library.

This decision has sparked controversy among the community of .NET developers working with Oracle. While many enterprise .NET applications use a 3rd party Oracle provider, System.Data.OracleClient is often used in small applications and typically has better integration with other Microsoft tools.
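To make the impact concrete, here is a hedged sketch of the kind of code involved: the same query written against the Microsoft provider and against ODP.NET (Oracle.DataAccess.Client), one of the third-party providers. The connection strings are placeholders and the table name is purely illustrative.

// Deprecated Microsoft provider: System.Data.OracleClient (marked obsolete in .NET 4.0).
using (var connection = new System.Data.OracleClient.OracleConnection("Data Source=ORCL;User Id=scott;Password=..."))
using (var command = connection.CreateCommand())
{
    connection.Open();
    command.CommandText = "SELECT COUNT(*) FROM employees";
    Console.WriteLine(command.ExecuteScalar());
}

// Roughly equivalent code with ODP.NET; the ADO.NET surface is nearly identical,
// so for simple cases the migration is mostly a change of namespace and connection string.
using (var connection = new Oracle.DataAccess.Client.OracleConnection("Data Source=ORCL;User Id=scott;Password=..."))
using (var command = connection.CreateCommand())
{
    connection.Open();
    command.CommandText = "SELECT COUNT(*) FROM employees";
    Console.WriteLine(command.ExecuteScalar());
}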

Microsoft insists this decision was made after much deliberation and research:

After carefully considering all the options and talking to our customers, partners, and MVPs it was decided to deprecate OracleClient as a part of our ADO.NET roadmap.

Part of the reasoning for this decision is the increasing availability and improvement of third-party ADO.NET data providers for Oracle. There have been significant performance improvements and enhanced multi-version compatibility among the popular Oracle providers.

Despite the community backlash, Microsoft has shown no signs of reversing the decision, and none should be expected. Microsoft’s official line is that “many of the third party providers are able to consistently provide the same level of quality and support that customers have come to expect from Microsoft.” Therefore, it’s not worth the investment to bring OracleClient up to parity with the third party providers, and this frees MS resources to focus on ADO.NET.

Some are calling this move an underhanded strike to raise the bar for .NET development against Oracle, but many are viewing this decision with a guarded optimism that Microsoft might be rethinking its not-invented-here bias. For example, it is a common complaint among the ALT.NET crowd that Microsoft reinvents the wheel for no reason, creating their own versions of products when good alternatives already exist. See MSTest vs. NUnit or Entity Framework vs. NHibernate. In the context of recent decisions to officially support jQuery and provide the source code for ASP.NET MVC, this decision could be interpreted as Microsoft further admitting that it doesn’t need to control the entire stack; they can rely on their partners and the community to provide some pieces.

Source: InfoQ


Firefox 3.5 is finally here! After several nearly-finished builds released over the last few weeks, the finished, ready-for-public-consumption version of our favorite browser has officially shipped today.

Firefox 3.5

Apart from all the new features this version of the browser brings, one of the most interesting for the Latin American community is the addition of two new official localizations. Joining the Spain and Argentina versions of Firefox, there is now a specific version for Chile and another for Mexico. Finally, those of us who care about the difference between “Descargá” and “Descarga” can use the leading free browser localized to our own variant of Spanish.

Below is the list of improvements in Firefox 3.5 and the links to download the localized versions.

This is the list of improvements, copied and pasted from the Mozilla Foundation’s site:

Firefox 3.5 is based on the Gecko 1.9.1 rendering platform, which has been in development since last year. Firefox 3.5 offers many changes over the previous version, implementing new web technologies and improving performance and ease of use. Some of the most notable features are:

* Available in more than 70 languages (get your localized version!)
* Support for the HTML5 <audio> and <video> elements, including native playback of Vorbis audio and Ogg Theora video (try it here!)
* Improved tools for controlling your private data, including a private browsing mode.
* Better performance in web applications thanks to the new TraceMonkey JavaScript engine.
* The ability to share your geographic location with web sites using location-aware browsing (try it here!)
* Native JSON support and web worker threads.
* Improvements to the Gecko rendering engine, including speculative parsing for faster content rendering.
* Support for new web technologies such as downloadable fonts, CSS media queries, new transforms and properties, JavaScript query selectors, HTML5 local storage and offline application storage, <canvas> text, ICC profiles, and SVG transforms.

Not to mention that, according to Mozilla, Firefox 3.5 is twice as fast as Firefox 3 and up to 10 times faster than Firefox 2 at running JavaScript, something increasingly common in sophisticated web applications such as Gmail.

Source: RedUsers


NHibernate 2.1alpha3 ready

By Dario Quintana | Filed in NHibernate

Yes, NHibernate 2.1alpha3 is ready to download. Have a look at the release notes.

You can download it here. See the official announcement.


For a few days now I’ve been having a close look at LinFu 2.2 and talking with Philip Laureano, the creator of this amazing framework. LinFu 2.2 is very different from LinFu 1: it now talks directly to Mono.Cecil to modify types at runtime; type weaving, for friends :-)

With LinFu 2.2 you can do:

  • Property interception
  • New operator interception
  • Third-party method interception

This interception is done without proxies: the assembly is changed at runtime, and then you’re able to add hooks wherever you want.

// Load the assembly that will be modified at runtime (LinFuEngine is the small
// wrapper over LinFu's low-level API mentioned below).
LinFuEngine linfu = new LinFuEngine("MethodInterception.exe");

var targetType = linfu.GetType<Foo>();
var typeName = targetType.Name;

// Choose which types, which methods and which method calls get intercepted:
// here, calls to Console.WriteLine made from Foo.DoSomething.
targetType.InterceptMethodCalls(
    t => t.Name.Contains(typeName),
    m => m.DeclaringType.Name.Contains(typeName) && m.Name == "DoSomething",
    methodCall => methodCall.DeclaringType.Name == "Console" && methodCall.Name == "WriteLine");

// Create an instance of the modified type and plug in our interceptor.
var instance = linfu.CreateInstanceModified<Foo>();
var host = (IMethodReplacementHost) instance;

var interceptor = new Interceptor();
host.MethodReplacementProvider = new MethodInterceptorProvider(interceptor);

// Invoke Foo.DoSomething through reflection; the intercepted Console.WriteLine
// call inside it is routed to the interceptor.
MethodInfo targetMethod = linfu.CreateModifiedType<Foo>().GetMethod("DoSomething");
targetMethod.Invoke(instance, null);

Console.WriteLine("Was intercepted: {0}", interceptor.HasBeenCalled);

I was trying to write a small wrapper over LinFu because right now it is a low-level API; to get the complete code you can check it out here.

In this example we are modifying the assembly and intercepting the Console.WriteLine method. The example is still low level, but be patient: LinFu 2.2 is not released yet, and you can begin using it and proposing features to make it easier.

NHibernate is using LinFu 1 for dynamic proxy creation. I’m working (and Philip is also helping a lot!) to make LinFu 2.2 work with NHibernate and pass NHibernate’s whole test suite.

Hope you find this interesting. For more info, check out Philip’s blog!


ASP.NET MVC lets you intercept actions via a feature called filters. Filters are just attributes that you apply to actions, and they allow you to run code before or after the action methods are executed (there are also filters that intercept code before/after the result is executed, when errors are thrown, etc.). There are a few kinds of filter attributes; the blue square in the picture below shows which filters come supported out of the box.

Today we are going to talk about the AuthorizeAttribute and how to extend and test it.

AuthorizeAttribute (from MSDN):

Represents an attribute that is used to restrict access by callers to an action method.

This filter is the first to be called when the controller action invoker tries to run action methods. On the attribute you can set which users or roles can execute an action; if the user/role doesn’t fulfill what was established on the action, an unauthorized result is raised. Remember that right after the filter execution ends in an ActionResult, that result is executed.
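For reference, the stock attribute is applied like this (the action and role names are just examples):

// Only users in the "Admin" role reach the action body; everyone else gets an
// unauthorized result before the method runs.
[Authorize(Roles = "Admin")]
public ActionResult Edit(int id)
{
    return View();
}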

Now, let’s consider the hypothetical scenario where you need a custom authorization scheme: you need more than user/role, or you need neither of them, because your model is based on a security level. So you don’t care who the user is or what role they have; you just need their security level.

With this scenario you can write a custom Authorize attribute:
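What follows is a minimal sketch of what such an attribute could look like, assuming ASP.NET MVC 1.0’s AuthorizeAttribute; RequiredLevel and SecurityLevelRepository are hypothetical names rather than the ones used in the downloadable example.

using System.Web.Mvc;

public class CustomAuthorizeAttribute : AuthorizeAttribute
{
    // Minimum security level required to run the decorated action (hypothetical property).
    public int RequiredLevel { get; set; }

    public override void OnAuthorization(AuthorizationContext filterContext)
    {
        // First run the well-known user/role authorization from the base class.
        base.OnAuthorization(filterContext);
        if (filterContext.Result is HttpUnauthorizedResult)
            return;

        // Then the custom part: get the user from the database (or any other
        // source) and check its security level against the required one.
        string userName = filterContext.HttpContext.User.Identity.Name;
        int userLevel = SecurityLevelRepository.GetLevelFor(userName); // hypothetical helper

        if (userLevel < RequiredLevel)
            filterContext.Result = new HttpUnauthorizedResult();
    }
}

An action could then be decorated with, for example, [CustomAuthorize(RequiredLevel = 2)].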

In this CustomAuthorize attribute, we first perform the well-known authorization (that is, executing the code as it is in the base class). Once that authorization part has passed, we go through our custom authorization part: we get the user from the database (or whatever other source) and check the security level. If the user is allowed to execute the action method, we finish without setting the result; if not, we set an HttpUnauthorizedResult. In the browser you will be redirected to the login page if you’re not allowed to execute that code.

The problem comes when you need to test the authorization. You can actually do it with some mocking and by overriding some code.

Another thing you have to know is that the object in charge of executing the actions is the ControllerActionInvoker. So, to invoke an action from your tests and see the result when the filters are invoked, we need to customize it a bit and override the method in charge of executing the result (the ActionResult), which is InvokeActionResult. Here is how our method override should look:
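A minimal sketch, assuming MSTest; the class name is illustrative and the real implementation ships with the downloadable example.

using System.Web.Mvc;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Executes actions as usual, but instead of executing the resulting ActionResult
// it asserts that the result is of the expected type TResult.
public class ActionInvokerExpectingResult<TResult> : ControllerActionInvoker
    where TResult : ActionResult
{
    protected override void InvokeActionResult(ControllerContext controllerContext,
                                               ActionResult actionResult)
    {
        Assert.IsInstanceOfType(actionResult, typeof(TResult));
    }
}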

Assert is a class from a unit testing framework; in this example it is the unit testing framework that comes with Visual Studio, so every user can run the tests without having, e.g., ReSharper installed. This assert expects the result of the filter/action execution to be of type TResult (a generic parameter declared on the class). So with this class we can write an easy test to see whether the result is authorized or unauthorized.

Our test for authorized access should look like this:
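A sketch under the same assumptions; HomeController is a placeholder name, ‘pepe’ and ‘PermisiveAction’ come from the walkthrough below, and SetFakeAuthenticatedContext is the helper included in the downloadable example (its signature is assumed here).

[TestClass]
public class CustomAuthorizeTests
{
    [TestMethod]
    public void Authenticated_user_gets_a_ViewResult_from_PermisiveAction()
    {
        // 1. Controller creation (HomeController is a placeholder).
        var controller = new HomeController();

        // 2. Mock the authentication stuff for the user 'pepe'; the helper is
        //    assumed to set up controller.ControllerContext with a fake identity.
        controller.SetFakeAuthenticatedContext("pepe");

        // 3. Launch 'PermisiveAction' with the custom invoker; a ViewResult is
        //    expected because the CustomAuthorize filter lets the call through.
        var invoker = new ActionInvokerExpectingResult<ViewResult>();
        invoker.InvokeAction(controller.ControllerContext, "PermisiveAction");
    }
}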

First we create a new controller, mock the authentication stuff, and then use our custom action invoker to invoke the action via the InvokeAction method (passing the context and the name of the action to be executed).

We are using some extension and helper methods, e.g. SetFakeAuthenticatedContext, which is included in the example; there you’ll see which elements you need to mock when using Authorize filter attributes.

To understand what happens in this first test method:

  1. Controller creation.
  2. Mock the authentication stuff using our user named ‘pepe’.
  3. Using the custom invoker we launch the action ‘PermisiveAction’.
  4. The CustomAuthorize filter runs and the authorization passes.
  5. The action is executed and returns a ViewResult (by doing return View() in the code).
  6. The assertion is made, everything is OK, and the test passes.

And now let’s look at another test:
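Again a sketch under the same assumptions; ‘RestrictedAction’ stands in for an action whose required security level the user ‘pepe’ does not meet (it lives in the same test class as the previous method).

[TestMethod]
public void Unauthenticated_or_low_level_user_gets_an_HttpUnauthorizedResult()
{
    var controller = new HomeController();            // placeholder controller
    controller.SetFakeAuthenticatedContext("pepe");   // same helper as before

    // This time the CustomAuthorize filter stops the call, so the invoker
    // should see an HttpUnauthorizedResult instead of a ViewResult.
    var invoker = new ActionInvokerExpectingResult<HttpUnauthorizedResult>();
    invoker.InvokeAction(controller.ControllerContext, "RestrictedAction");
}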

The difference from the previous one is in steps 4, 5 and 6: the action is not executed because the filter raises an HttpUnauthorizedResult. Download the example to better understand how to manage the testing of authorization on actions.

Download code example
