2024-03-29T11:56:55Z https://riull.ull.es/oai/request
oai:riull.ull.es:915/29079 (2022-11-17T11:35:11Z)
Spectral converting nanoparticles: production, structural and spectroscopic study
Lorenzo Álvarez, Jorge
Yanes Hernández, Ángel Carlos
Castillo Vargas, Francisco Javier del
Grado en Física
The use of nanoparticles dates back to antiquity, in the quest to embellish artistic or pottery pieces, for which artisans used, knowingly or not, fine powders of gold or other metals that today would be classified as nanoparticles. It was not until the twentieth century that the term "nanostructured materials" was coined, and since then countless production and characterization methods have been developed, leaving the rudimentary techniques of the past behind. In recent years, awareness has grown of the great potential of matter at small scales, from its use in electronics and lighting, with the creation of LED bulbs, to its use in medicine for the localized treatment of diseases. The current focus is on materials doped with rare-earth ions, owing to their special luminescent properties. In particular, the up-conversion phenomenon stands out, with potential photocatalytic applications such as hydrogen production (water splitting) or the removal of pollutants from water, as well as medical applications such as photodynamic therapy.
This Bachelor's Thesis (TFG), entitled "Spectral converting nanoparticles: production, structural and spectroscopic study", presents the production and the structural and spectroscopic characterization of Sr2YbF7 nanoparticles (NPs) doped with different ions: Eu3+ (5%), Er3+ (1%), Tm3+ (0.75%) and Gd3+ (50%). Their possible use in photocatalysis and in photodynamic therapy has also been evaluated.
The nanoparticles were synthesized by the solvothermal method, which requires high pressures and moderate temperatures (≤200 °C), as well as organic solvents and surfactants, yielding the desired composition, morphology and sizes. Core@Shell structures (Sr2YbF7:RE3+@Sr2YF7) of different sizes were also successfully obtained with this method, considerably increasing the emission of the NPs.
After obtaining these NPs, they were structurally characterized by several techniques: X-ray diffraction (XRD), transmission electron microscopy (TEM-HRTEM) and energy-dispersive X-ray spectroscopy (EDS). The XRD patterns were indexed to the tetragonal phase of the Sr2YF7 matrix, with sizes between 10 and 15 nm (calculated with the Scherrer equation). The TEM-HRTEM images confirmed the XRD results and showed a uniform size distribution. Finally, an EDS analysis confirmed the presence of Sr, Yb and F as the main constituents of the NPs, as well as the expected stoichiometric ratio (2:1:7) of the Sr2YbF7 matrix.
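The Scherrer size estimate mentioned above can be sketched in a few lines; the peak position and width below are illustrative values, not data from this work:

```python
import math

def scherrer_size(wavelength_nm, fwhm_deg, two_theta_deg, k=0.9):
    """Crystallite size D = K * lambda / (beta * cos(theta)).

    wavelength_nm: X-ray wavelength (0.15406 nm for Cu K-alpha)
    fwhm_deg:      peak full width at half maximum, in degrees 2-theta
    two_theta_deg: peak position, in degrees 2-theta
    k:             shape factor (~0.9 for roughly spherical crystallites)
    """
    beta = math.radians(fwhm_deg)            # FWHM in radians
    theta = math.radians(two_theta_deg / 2)  # Bragg angle
    return k * wavelength_nm / (beta * math.cos(theta))

# Hypothetical peak: FWHM of 0.8 deg at 2-theta = 30 deg gives ~10 nm,
# the order of magnitude reported for these NPs.
print(round(scherrer_size(0.15406, 0.8, 30.0), 1))  # → 10.3
```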
The spectroscopic analysis of the materials completed the structural characterization and also allowed the different energy-transfer mechanisms at play in them to be studied. In particular, the emission of the materials under infrared excitation was studied, providing deeper insight into up-conversion processes.
The emission and excitation spectra of the samples doped with 2Eu3+-0.75Tm3+ revealed that the dopant ions occupy centrosymmetric sites (the ratio between the intensities of the 5D0 → 7F2 and 5D0 → 7F1 transitions, known as the asymmetry ratio R, was studied, giving R = 0.25). Infrared absorption spectroscopy revealed the presence of oleic acid molecules on the surface of the NPs. The importance of coating the NPs with inert Sr2YF7 shells for the luminescence efficiency was also highlighted.
For the NPs doped with 0.75Tm3+ and 50Gd3+-0.75Tm3+, very efficient up-conversion (UC) processes were observed, with intense ultraviolet (UV) emissions. Upon coating with inert Sr2YF7 shells, the overall emission of the NPs doped with 0.75Tm3+ and 50Gd3+-0.75Tm3+ increased by up to 36 and 33 times, respectively, relative to the uncoated samples.
Moreover, the study of the emission intensity as a function of the incident radiation power revealed a competition mechanism between the linear decay of the emitting ions and the up-conversion processes in depopulating the intermediate excited states. This study also revealed some effects related to the concentration of Yb3+ ions in the materials studied.
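A standard way to carry out this kind of power-dependence analysis (not necessarily this work's exact procedure) is to fit I ∝ P^n on a log-log scale; the slope n approximates the number of infrared photons involved per emitted photon, and a drop of n toward 1 signals that linear decay dominates the depopulation of the intermediate states:

```python
import numpy as np

# Synthetic data standing in for measured UC intensities: an ideal
# two-photon process with I = c * P**2 (powers in arbitrary units).
power = np.array([10.0, 20.0, 40.0, 80.0, 160.0])
intensity = 0.05 * power ** 2

# Linear fit in log-log space: slope = photon order n, intercept = log(c)
n, log_c = np.polyfit(np.log(power), np.log(intensity), 1)
print(round(n, 3))  # → 2.0 for a perfect two-photon process
```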
For the NPs doped with 1Er3+, very efficient UC processes were also observed, with intense emissions in the visible (VIS), in particular the red emission at 660 nm, as well as a large increase in the overall emission upon coating with inert shells, up to 143 times relative to the uncoated samples. In addition, a deeper understanding of the different energy-transfer mechanisms present in these materials was gained.
The potential use of the intense UV emissions of the samples doped with 0.75Tm3+ and 50Gd3+-0.75Tm3+ in photocatalytic processes was also studied. To this end, the photodegradation of methylene blue (MB) was measured, yielding values of 38% and 19% for the NPs doped with 0.75Tm3+ and 50Gd3+-0.75Tm3+, respectively. In view of these results, several strategies to improve the photocatalytic processes are proposed.
Finally, the intense red emission of the NPs doped with 1Er3+ suggests their potential use in photodynamic therapy. For this purpose, α-cyclodextrin (α-CD) molecules were attached to these NPs so that they could be dispersed in aqueous solutions, followed by methylene blue (MB) molecules acting as photosensitizers. A spectroscopic study of the assembly yielded promising results, indicating efficient energy transfer from the Er3+ ions to the MB molecules.
2022-07-19T09:56:46Z
2022-07-19T09:56:46Z
2022
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/29079
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/20663 (2021-11-05T09:01:55Z)
Equation of state and Rankine-Hugoniot shock relations for realistic dynamical processes in stellar atmospheres
Koll Pistarini, Matías
Moreno Insertis, Fernando
The complexity of many of the processes that take place in the solar atmosphere and interior has led
to the development of large, often multidimensional, numerical models to understand in detail the underlying physics. The study of phenomena from jets, surges and spicules, all the way to coronal mass
ejections (CMEs) or solar flares, of structures such as coronal arcs, or of processes such as convection
or wave propagation, requires combining aspects of hydrodynamics, electromagnetism, plasma physics
and radiative transport. Numerical codes make it possible to solve the equations of those fields together in detail. One of the basic aspects of these numerical codes is the equation of state
(EOS) that they implement. The EOS can be very simple, or very complex, depending on the degree
of realism needed to obtain relevant results.
A code at the forefront of the study of dynamical processes in the solar atmosphere is the Bifrost code,
developed at the University of Oslo. The fundamental EOS implemented in Bifrost deals with an ideal
gas with ionization/recombination and molecular formation/dissociation processes; it is realistic and
complex, and contains detailed microphysics. In this Graduation Thesis, we study the Bifrost EOS
from a double perspective: on the one hand, we carry out a detailed characterization of the EOS,
covering a large variety of aspects of the thermodynamics of partially ionized gases; on the other
hand, we study the shock transitions, in particular the Hugoniot curves, allowed by the EOS. Concerning the first aspect, the motivation to carry out the characterization is that the Bifrost EOS is not
well documented in the literature, which may constitute a problem for users of the code. Concerning
the second, one cannot find publications that describe the general behavior of shocks calculated with
this EOS. Given the ionization/recombination and molecular formation/dissociation processes taking
place in the solar atmosphere and included in the EOS, this study can be important to understand
the properties of shocks in the numerical models.
In the first part, the characterization of the EOS is carried out by calculating thermodynamic quantities on the basis of the tables of temperature and pressure as a function of density and internal energy
obtained from the EOS. For all the numerical calculations throughout this work original Python programs have been developed independently of Bifrost’s own program suite. To begin with, quantities
that do not require advanced numerical methods, such as atomic mass per particle or specific heat at
constant volume, are determined. Next, we calculate more advanced quantities, such as the entropy
distribution, ionization or dissociation coefficients, the Chandrasekhar adiabatic coefficients, the adiabatic gradient, the specific heat ratio γ, and the speed of sound, most of which require high-order
integrations or interpolations in one or two dimensions. The results are compared to those obtained
for the equation of state for the simplest ideal gases, i.e., those with no changes in chemical composition, showing the importance of taking into account the ionization and molecular formation processes.
In the case of entropy, the integration methods used are discussed, and the results compared with
the adiabatic curves provided by Dr. H. Ludwig, obtained in the context of the COBOLD code, thus
verifying the validity of our results. Additionally, a detailed analytical expression for the internal
energy of the gas is developed that includes all the ionization levels for Hydrogen and Helium and the
formation of H2 molecules. The obtained formula is tested by making calculations in regions where
the chemical composition does not vary, obtaining an excellent fit to the general curves obtained from
the Bifrost EOS table in those regions. In this way, we now have at our disposal detailed information
about the ionization or molecular formation processes for H and He that take place in the different
ranges of density and internal energy (or pressure and temperature) in the solar atmosphere.
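The table-based workflow described above can be sketched with a toy example. An ideal-gas table p(ρ, e) stands in here for the Bifrost EOS table (an assumption for illustration only); the partial derivatives are taken by finite differences on the grid and combined through the thermodynamic identity c² = (∂p/∂ρ)ₑ + (p/ρ²)(∂p/∂e)_ρ for the adiabatic sound speed:

```python
import numpy as np

# Hypothetical ideal-gas table p(rho, e) standing in for the Bifrost EOS table.
gamma = 5.0 / 3.0
rho = np.linspace(1e-8, 1e-6, 200)   # density grid (arbitrary units)
e = np.linspace(1e12, 1e13, 200)     # specific internal energy grid
R, E = np.meshgrid(rho, e, indexing="ij")
P = (gamma - 1.0) * R * E            # tabulated pressure

# Finite-difference partials on the table, as one would do with real EOS tables
dP_drho = np.gradient(P, rho, axis=0)  # (dp/drho) at constant e
dP_de = np.gradient(P, e, axis=1)      # (dp/de) at constant rho

# Adiabatic sound speed squared: c^2 = (dp/drho)_e + (p/rho^2)(dp/de)_rho
c2 = dP_drho + (P / R**2) * dP_de

# For an ideal gas c^2 = gamma * p / rho; the table-based value should agree
c2_exact = gamma * P / R
print(np.max(np.abs(c2 / c2_exact - 1.0)) < 1e-6)  # → True
```

With the real tables the same derivatives pick up the ionization and dissociation bands automatically, which is where the departures from the simple ideal gas appear.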
In the second part of the work, the jumps of pressure, temperature and density across a shock, the
corresponding increase in entropy and the incoming Mach numbers allowed by the Bifrost EOS are
studied in detail. For this, the Rankine-Hugoniot jump relations are used and the corresponding
Hugoniot curves are obtained, comparing the results to the well known ones for simple ideal gases.
First, analytical expressions are derived for the jumps allowed when the component of the internal
energy associated with ionization and molecular dissociation processes is uniform in the local thermodynamical domains where the pre-shock and post-shock states are located. This leads to unexpected
results when that component has a different value before and after the shock transition. Also, an analytical expression for the derivative of post-shock pressure with respect to post-shock density along
the Hugoniot curve is calculated in the general case. Then, a program is created to calculate Hugoniot
and Mach number curves numerically for the general case. To illustrate the results, curves starting at
pre-shock states in five regions of interest are calculated. Those curves have entry states located in regions of simple ideal gas but are such that, along their path, they cross bands of ionization or molecular
dissociation. The crossing gives rise to striking consequences: the density jump can become much
larger in those shocks than in the standard simple ideal gas ones; duplication in the admissible pressure and temperature jumps for a given density jump is also obtained. The temperature jump for
a given pressure jump is much reduced compared to the simplest ideal gas case when the postshock
state is in one of those bands, the reason being that the incoming energy in the shock may be used to
ionize (or cause molecular dissociation in) the gas to a larger extent than in increasing its temperature.
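The simple-ideal-gas baseline against which these results are compared can be written down directly from the Rankine-Hugoniot relations (a textbook sketch, not the Bifrost-EOS calculation of this work):

```python
def rh_jumps(mach, gamma=5.0 / 3.0):
    """Simple ideal-gas Rankine-Hugoniot jumps for incoming Mach number."""
    m2 = mach * mach
    density_ratio = (gamma + 1.0) * m2 / ((gamma - 1.0) * m2 + 2.0)
    pressure_ratio = (2.0 * gamma * m2 - (gamma - 1.0)) / (gamma + 1.0)
    temperature_ratio = pressure_ratio / density_ratio  # ideal gas: T ~ p/rho
    return density_ratio, pressure_ratio, temperature_ratio

r, p, t = rh_jumps(2.0)
print(round(r, 3), round(p, 3))  # → 2.286 4.75
```

In the strong-shock limit the density jump saturates at (γ+1)/(γ−1) = 4 for γ = 5/3, which is exactly the bound that the ionization and dissociation bands allow the EOS-based shocks to exceed.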
Summarizing, we have derived a large number of thermodynamical and shock-related properties of
the gas described by the EOS of use in the Bifrost code exclusively on the basis of the temperature
and pressure maps as functions of density and internal energy. The results illuminate the behavior
of the gas in various regimes of interest for the calculations in the upper solar interior, photosphere,
chromosphere, transition region and corona. Particularly interesting results are obtained concerning
the properties of shocks. We expect that the current results can be of use to the community using the
Bifrost code in future.
2020-07-28T09:25:56Z
2020-07-28T09:25:56Z
2020
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/20663
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/6182 (2021-11-05T09:02:19Z)
Obtention of the Baryonic and Dark Matter power spectrum using certain approximations on the equations that rule the evolution of Baryonic Acoustic Oscillations
Armas Rillo, Sergio de
Betancort Rijo, Juan Eugenio
2017-09-26T08:45:47Z
2017-09-26T08:45:47Z
2017
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/6182
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 internacional)
oai:riull.ull.es:915/15742 (2021-11-05T09:02:28Z)
Decoherence and entanglement in quantum physics
Medina Hernández, Jorge
Brouard Martín, Santiago
Decoherence
Master equation
Stochastic master equation
Throughout this project, the case of a composite system formed by a harmonic oscillator and a two-state atom will be studied, considering that the two affect each other through a Jaynes-Cummings interaction and that both are coupled to different reservoirs assumed to be independent of each other. The effects of decoherence on the evolution of the populations and coherences will be analyzed, specifically the energy dissipation and the coherence loss, as well as the stationary states and the relevance of the Jaynes-Cummings interaction. Finally, the solution of stochastic equations will be analyzed, particularly the evolution of the expectation value of the position operator for the harmonic oscillator, for the case of simultaneous measurements on both subsystems as well as for a measurement on the harmonic oscillator when the two-state atom undergoes a decoherence process.
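The dissipation and coherence loss described above can be illustrated on the simplest piece of the model. This is a minimal sketch, not the project's actual composite system: a lone two-state atom coupled to a zero-temperature reservoir, evolved under the Lindblad master equation dρ/dt = g(σ₋ρσ₊ − ½{σ₊σ₋, ρ}):

```python
import numpy as np

g = 1.0                                   # decay rate (arbitrary units)
sm = np.array([[0, 0], [1, 0]], complex)  # lowering operator; index 0 = |e>
sp = sm.conj().T                          # raising operator

def drho(rho):
    """Lindblad generator for pure decay of the two-state atom."""
    return g * (sm @ rho @ sp - 0.5 * (sp @ sm @ rho + rho @ sp @ sm))

# Start in an equal superposition: rho = |psi><psi|, psi = (|e>+|g>)/sqrt(2)
psi = np.array([1, 1], complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

dt, t_end = 1e-3, 1.0
for _ in range(int(t_end / dt)):          # 4th-order Runge-Kutta steps
    k1 = drho(rho)
    k2 = drho(rho + 0.5 * dt * k1)
    k3 = drho(rho + 0.5 * dt * k2)
    k4 = drho(rho + dt * k3)
    rho = rho + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Excited population decays as e^{-g t}; coherence decays as e^{-g t / 2}
print(abs(rho[0, 0].real - 0.5 * np.exp(-g)) < 1e-6)       # → True
print(abs(abs(rho[0, 1]) - 0.5 * np.exp(-g / 2)) < 1e-6)   # → True
```

The full problem adds the oscillator, its own reservoir, and the Jaynes-Cummings coupling between the two subsystems, but the population/coherence decay rates seen here are the basic signature being tracked.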
2019-07-26T10:50:47Z
2019-07-26T10:50:47Z
2019
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/15742
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/30021 (2023-01-11T11:22:42Z)
Introduction to the Bosonic String Theory
Rodríguez Sánchez, Paula
Gómez Llorente, José María
Grado en Física
In this work we analyze the main aspects related to the appearance and development of bosonic
string theory from an introductory point of view and with the knowledge obtained during the degree.
First, we present the main tool that modern physics uses to formulate such problems, the principle of least action, and we work through examples such as the action of the free particle, laying the groundwork for solving the motion of the relativistic particle and, later, the motion of the relativistic string. We carry out a brief review of the events that gave rise to
this theory of bosonic strings, as well as of other theories that emerged with the aim of unifying the four fundamental forces: gravity, the electromagnetic force, the weak force and the strong force. We
continue the study looking for symmetries and conserved quantities that will significantly reduce the
complexity of the problem at hand. We carry out two types of quantization in our theory: canonical
and light cone quantization, and we obtain the mass spectrum for bosonic strings. Finally, we discuss
the current situation of string theory, the problems it has solved and the ones it intends to solve in
the future.
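The two actions the abstract builds on can be stated compactly; this is a standard-reference sketch in conventional string-theory notation, not necessarily the notation of this memoir:

```latex
% Free relativistic particle: proper length of the worldline
S_{\text{particle}} = -mc \int ds
                    = -mc \int \sqrt{-\eta_{\mu\nu}\, dx^{\mu}\, dx^{\nu}}

% Nambu-Goto action: area of the string worldsheet (T = string tension)
S_{\text{NG}} = -\frac{T}{c} \int d\tau\, d\sigma\,
    \sqrt{\bigl(\dot{X}\cdot X'\bigr)^{2} - \dot{X}^{2}\, X'^{2}}
```

Extremizing the first reproduces relativistic free motion; extremizing the second gives the string equations of motion whose quantization yields the mass spectrum discussed above.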
2022-09-29T10:40:19Z
2022-09-29T10:40:19Z
2022
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/30021
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/10626 (2021-11-05T09:02:21Z)
Cálculo de órbitas pseudocirculares externas en el problema restringido de los tres cuerpos. Sistemas binarios
García Dorta, Luis
González Martínez-Pais, Ignacio
Física
2018-10-10T10:00:05Z
2018-10-10T10:00:05Z
2018
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/10626
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 internacional)
oai:riull.ull.es:915/28439 (2022-11-15T11:51:06Z)
Feasibility study of artificial intelligence techniques applied to the prediction of dust
Galván Fraile, Victor
Díaz González, Juan Pedro
González Fernández, Albano José
Grado en Física
This end of degree project constitutes an introduction to the application of
Machine Learning techniques on the prediction of meteorological variables,
concretely, aerosols. It presents a bibliographic review of the role played by dust in atmospheric phenomena, including not only its main sources but also the fundamental production mechanisms. Furthermore, it presents
a combination of theoretical concepts of Machine Learning algorithms,
mainly based on the guidelines of the book “Hands-on Machine Learning
with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques
to Build Intelligent Systems” [Ger19], and on the “Machine and Deep Learning” courses of Stanford University, taught online on Coursera [Ng22]. The aim of this project is, therefore, to make a first approach to
some of the basic algorithms of Machine Learning and put them into practice.
In particular, after a data preprocessing phase, two models with different artificial intelligence architectures were built, then trained and tested on different time periods. Furthermore, a study of the input variables and the
window sizes has been carried out in order to optimize the performance of
the models. Finally, several analyses of the results obtained have been carried out, highlighting the strengths and weaknesses of each model, in addition to suggesting the basis for future projects in this field. Additionally,
and with the aim of increasing the transversality of this study, two dust
intrusion classifying models have been made, describing not only their main
characteristics, but also the results obtained and their possible improvements.
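The windowing idea behind the models above can be sketched minimally. This is an illustrative stand-in, not the project's architectures: a synthetic series takes the place of real aerosol data, and the next value is predicted from the previous `window` values with a plain linear model:

```python
import numpy as np

# Synthetic series standing in for an aerosol time series
rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 500)) + 0.05 * rng.standard_normal(500)

# Build (window -> next value) training pairs by sliding a window
window = 10
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]

split = 400  # train on the first period, test on a later one
coef, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)
pred = X[split:] @ coef

rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
print(round(rmse, 3))  # small: the linear window model tracks the series
```

Replacing the linear solve with the neural architectures used in the project, and the synthetic series with the preprocessed aerosol data, recovers the train/test scheme described in the abstract.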
2022-06-28T11:20:33Z
2022-06-28T11:20:33Z
2022
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/28439
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/25017 (2021-11-05T09:02:02Z)
Optimización de variables temporales y dosimétricas en algoritmos ultra-rápidos para estudios de gammagrafía ósea en un tomógrafo de emisión de fotón único (SPECT)
Gey Segade, Mateo Alberto
Catalán Acosta, Antonio
Hernandez Concepción, Ethel
Alonso Ramírez, Daniel
The following work was carried out by a student with the purpose of getting to know an area that remains largely unknown to physics students at the ULL, an area undergoing rapid growth thanks to the investment supporting it and demanding more specialists every year. This work presents the procedure and results of a study with several objectives, but it also serves as an introduction to medical physics and nuclear medicine for readers unfamiliar with these subjects, so that anyone can read and enjoy it.
The main aim of the study is to ascertain how the independent reduction of three parameters, two of them belonging to the SPECT acquisition and the third being the activity introduced into the Jaszczak phantom, affects the quality of the resulting tomographic image. These parameters are the time spent at every stop of the SPECT detectors, the rotation angle of the detectors between stops, and the activity of the radiopharmaceutical administered to the patient before a SPECT. To achieve this goal, several tests were performed, varying the parameters and obtaining the tomographic images corresponding to the various combinations of time, angle and radiopharmaceutical dose. These images were analyzed with the software ImageJ, drawing ROIs in the areas of interest and measuring the mean signal per pixel and its standard deviation. From these measurements, two parameters are calculated to characterize the resolution of the SPECT (FTC) and the quality of the results (SNR). Then, by calculating another image-quality parameter (CNR) and applying the Rose criterion, it is estimated whether the images are valid for clinical diagnosis and how the reduction of the aforementioned parameters affects the results.
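The ROI figures of merit used above can be sketched directly (the function names, the synthetic ROI values and the Rose threshold of ~5 are the textbook convention, not values from this work):

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio of a region of interest."""
    return np.mean(roi) / np.std(roi)

def cnr(roi_signal, roi_background):
    """Contrast-to-noise ratio between a signal ROI and a background ROI."""
    return (np.mean(roi_signal) - np.mean(roi_background)) / np.std(roi_background)

def passes_rose(roi_signal, roi_background, threshold=5.0):
    """Rose criterion: a detail is reliably detectable when CNR >~ 5."""
    return cnr(roi_signal, roi_background) >= threshold

# Hypothetical per-pixel counts for a "hot" ROI and a background ROI
rng = np.random.default_rng(1)
hot = rng.normal(100.0, 5.0, 1000)
bkg = rng.normal(60.0, 5.0, 1000)
print(passes_rose(hot, bkg))  # → True (contrast ~40 against noise ~5)
```

Shortening the acquisition time or lowering the dose raises the noise in these ROIs, which is exactly what pushes the CNR toward, and eventually below, the Rose threshold.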
The other objective of the study is to verify the result of an article stating that a SPECT acquisition can be shortened from 13 to 4 minutes without losing its diagnostic capacity. Since the study presented in that article supports its claim only with the opinion of seven physicians, it is tested here with empirical results using the Rose criterion.
The decrease in time per test, in addition to improving patient comfort, could also improve the quality of the result, since in a shorter test the patient is less likely to move. With a significant reduction in the duration of SPECT scans, the number of tests carried out per day could also be increased. Moreover, since this is a diagnostic test, being able to reduce the radiopharmaceutical dose supplied to the patient would also be an advantage, as it reduces the risk of adverse effects from the gamma radiation emitted inside the body. A reduction in the dose used would also decrease the economic cost of the SPECT.
2021-07-29T11:31:18Z
2021-07-29T11:31:18Z
2021
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/25017
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/3051 (2021-11-05T09:02:08Z)
Nanomateriales para la generación de luz blanca: obtención, estudio estructural y espectroscópico
Cantón Jara, Moisés
Castillo Vargas, Francisco Javier del
Yanes Hernández, Ángel Carlos
2016-09-02T09:40:24Z
2016-09-02T09:40:24Z
2016
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/3051
es
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 internacional)
oai:riull.ull.es:915/25014 (2021-11-05T09:02:05Z)
Estimación de parámetros fundamentales de enanas blancas mediante espectroscopía y fotometría en el visible
Reyes Rodríguez, Elena
Izquierdo Sánchez, Paula
Rodríguez Gil, Pablo
White dwarf
Spectroscopy
Photometry
This project focuses on the analysis of the observed spectra and photometry of a set of hydrogen-rich white dwarfs. We determined their fundamental parameters, such as the effective temperature
and surface gravity, using two independent methods. We compare the results provided by each
method and contrast them with values previously published.
White dwarfs represent the end point of the evolution of 97% of the stars in our Galaxy, including
our star, the Sun. They are dense stellar remnants because matter is in a plasma state and there is nothing preventing the atoms from approaching each other. In fact, the electrons get so close to each other that their positions become constrained and the pressure is dictated by the Pauli exclusion principle: two electrons (or, more generally, fermions) cannot have identical quantum numbers. As nuclear reactions do not take place in white dwarf cores, it is this electron degeneracy pressure that counteracts the gravitational pull.
In brief, white dwarfs are made of a nucleus of carbon and oxygen surrounded by a layer of helium,
which in turn is usually surrounded by a hydrogen layer, which generally prevents direct
observation of the interior composition due to its large opacity.
The determination of fundamental parameters in this work has been carried out with two methods
implemented using Python and a grid of synthetic spectra. The best-fit models for a sample of
twelve hydrogen-rich white dwarfs were obtained by least squares optimization.
The spectroscopic method relies on the sensitivity of the depth and width of the absorption lines to the effective temperature and surface gravity, while the photometric technique analyses the spectral energy distribution constructed from a set of broad-band magnitudes together with the known distance to the source.
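The grid search described above can be sketched as a plain χ² minimisation over the model grid; the names below are illustrative (the thesis' own Python implementation is not shown), and each synthetic spectrum is assumed to be resampled onto the observed wavelength axis:

```python
import numpy as np

def best_fit_model(obs_flux, obs_err, grid_fluxes, grid_params):
    """Pick the (Teff, log g) grid model that minimises chi^2 against an
    observed spectrum. Each synthetic spectrum is first scaled to the
    data with the analytic least-squares amplitude."""
    best = None
    w = 1.0 / obs_err**2
    for params, model in zip(grid_params, grid_fluxes):
        # optimal multiplicative scaling of the model onto the data
        a = np.sum(w * obs_flux * model) / np.sum(w * model**2)
        chi2 = np.sum(w * (obs_flux - a * model)**2)
        if best is None or chi2 < best[0]:
            best = (chi2, params)
    return best[1], best[0]
```

In practice the effective temperature and surface gravity would then be refined by interpolating between the grid nodes around the minimum.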
The results of this analysis are as expected: we find effective temperatures between 7300 and 22400
K and surface gravities around log g ~ 8 dex. In addition, these have been compared with the results
of a study of 230 white dwarf candidates that contains the stars in our sample. There is agreement
between the two works.
2021-07-29T11:30:55Z
2021-07-29T11:30:55Z
2021
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/25014
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/215422021-11-05T09:01:57Zcom_915_668com_915_488col_915_678
Quantum thermodynamics: a brief introduction to quantum thermal machines
Díaz Martín, Daniel
Alonso Ramírez, Daniel
El objetivo de traer al lector una revisión académica simple del importante y reciente campo de investigación de la Termodinámica cuántica es el eje central de este proyecto. Para conseguirlo nos centramos en las máquinas térmicas cuánticas. Una introducción a este importante tema es presentada junto al concepto de máquinas térmicas endorreversibles. En primer lugar, mostramos un estudio de su rendimiento motivado por un modelo cuántico, un máser de tres niveles, junto a unos resultados muy interesantes de su comportamiento. Posteriormente, con la idea de entender cómo funciona la dinámica de estos dispositivos, nos moveremos a la teoría de sistemas cuánticos abiertos, mostrando la derivación de una herramienta matemática muy importante en el campo de la Termodinámica cuántica: la ecuación maestra markoviana. Para finalizar, aplicaremos esta ecuación a un sistema específico, observando cómo la primera y la segunda ley de la termodinámica emergen de su dinámica.
The purpose of bringing the reader an academic and modest review of the emerging research field of Quantum Thermodynamics is the central axis of this project. To do so, we focus on quantum thermal machines. An introduction to this important subject is addressed along with the notion of endoreversible thermal machines. First, we present a study of thermal device performance using a particular quantum model, the three-level maser, from which rather general results can be derived. The idea is to understand in simple terms how the dynamics of this type of system works. Later, we introduce some basic elements of the theory of open quantum systems, showing the derivation of an important mathematical tool in Quantum Thermodynamics: the Markovian master equation ruling the reduced dynamics of the system of interest. To finish, we apply this equation to a specific system and see how the first and second laws of thermodynamics emerge in this context.
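The Markovian master equation referred to above generically takes the GKLS (Lindblad) form; as a reminder of that structure (the specific Hamiltonian and jump operators used in the thesis are not reproduced here):

```latex
\dot{\rho} = -\frac{i}{\hbar}\,[H,\rho]
  + \sum_k \gamma_k \left( L_k \rho L_k^{\dagger}
  - \tfrac{1}{2}\left\{ L_k^{\dagger} L_k ,\, \rho \right\} \right)
```

Here the $L_k$ are the jump operators coupling the working medium to the thermal baths and the $\gamma_k$ the corresponding rates; heat currents and entropy production, and hence the first and second laws, are read off from this dissipative part.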
2020-10-06T10:30:59Z
2020
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/21542
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/257392021-11-05T09:01:53Zcom_915_668com_915_488col_915_678
Microplastic analysis in sea urchins Diadema africanum via Raman Spectroscopy
Catalán Torralbo, Sergio
Ródenas Seguí, Airán
Microplastic
Raman
Spectroscopy
En este proyecto se ha estudiado la presencia de microplásticos en el interior de
erizos de mar Diadema africanum, recogidos en Tenerife, Canarias. Estos microplásticos
eran mayormente microfibras tanto transparentes como opacas con anchuras típicas de
5-10 µm. El objetivo era la identificación de la composición de un mínimo de un 10% del
total de las variadas microfibras encontradas en el tracto digestivo/intestinal y gónadas
de los erizos recogidos y analizados por el equipo de investigación de química del Dr.
Javier Borges (ULL). Para lograr nuestra meta, usamos un equipo comercial de micro-Raman, el sistema Renishaw InVia micro-Raman (de ahora en adelante µRaman). Para
poder obtener resultados satisfactorios, se desarrolló un protocolo de trabajo: (1) una
única configuración de medida optimizada para todas las microfibras, (2) un mismo
protocolo de tratamiento de espectros (sustracción de líneas base), y (3) la correlación
espectral con bases de datos ya existentes comerciales, así como con una base de
espectros propia, realizada mediante el mismo instrumento de medida y materiales
plásticos industriales estándar. Se asignaron como identificaciones positivas aquellas en
las que el coeficiente de correlación Pearson R fuese igual o superior a 0.7 (R2 ≥ 0.49).
Finalmente, tras el análisis espectral de 91 microfibras, 37 de ellas pudieron ser
identificadas composicionalmente, i.e. un ~40%. De entre las identificadas, encontramos
que un 47% de las fibras eran celulosa, un 24.3% de polipropileno (PP) y otro 24.3% de
polietilentereftalato, comúnmente llamado poliéster (PET). Además, se encontraron dos copolímeros muy diferentes, una microfibra de cada tipo: poli(dimethylsiloxane-co-alkylmethylsiloxane) y poly(1,4-cyclohexanedimethylene terephthalate-co-ethylene
terephthalate). Esta es la primera vez que se demuestra la presencia de microplásticos
en erizos de mar en la región Macaronésica y también en España.
In this project we studied the presence of microplastics inside Diadema africanum sea urchins collected in Tenerife, Canary Islands. Our goal was to identify the composition of at least 10% of the total number of microplastics found inside the digestive/intestinal tracts and gonads by Prof. Javier Borges' analytical chemistry research group. To achieve this, we used a micro-Raman spectrograph, the Renishaw InVia micro-Raman (µRaman) system. We compared the spectra obtained when analysing the fibres with two different plastic spectra libraries (one of them built by us using the same spectrograph), counting as a positive identification any match whose Pearson correlation coefficient R was greater than or equal to 0.7 (R² ≥ 0.49). These correlations were computed with the Wire 4.0 software. The Raman analysis showed that 47% (17 out of 37) of the identified fibres were cellulosic, 24.3% PP and 24.3% PET. In addition, two copolymers were found, one fibre of each: poly(dimethylsiloxane-co-alkylmethylsiloxane) and poly(1,4-cyclohexanedimethylene terephthalate-co-ethylene terephthalate). This is the first time the presence of microplastics has been confirmed in sea urchins in the Macaronesian region and in Spain.
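The identification criterion above (Pearson R ≥ 0.7 between a measured spectrum and a library reference) can be sketched as follows; names are illustrative and both spectra are assumed to be resampled to a common wavenumber axis (the thesis used the Wire 4.0 software for this step):

```python
import numpy as np

def pearson_match(spectrum, library, threshold=0.7):
    """Compare a measured Raman spectrum against a dict of reference
    spectra and return the matches with Pearson R >= threshold,
    sorted from best to worst."""
    s = spectrum - spectrum.mean()
    hits = []
    for name, ref in library.items():
        r = ref - ref.mean()
        R = np.sum(s * r) / np.sqrt(np.sum(s**2) * np.sum(r**2))
        if R >= threshold:
            hits.append((name, float(R)))
    return sorted(hits, key=lambda t: -t[1])
```

In real use, baseline subtraction (step 2 of the protocol above) must be applied to both spectra before correlating, otherwise broad fluorescence backgrounds dominate R.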
2021-10-22T09:46:18Z
2021-10-22T09:46:18Z
2021
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/25739
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/241062021-11-05T09:02:01Zcom_915_668com_915_488col_915_678
Materiales nanoestructurados para aplicaciones en dispositivos ópticos
Pérez González, Cristian
Yanes Hernández, Ángel Carlos
Castillo Vargas, Francisco Javier del
Nanostructured
optical
nanocrystals
In the present work, entitled "Nanostructured materials for applications in optical devices", different optically active materials have been obtained. First, rare-earth (RE3+) doped nano-glass-ceramics (nGCs) comprising Sr2GdF7 nanocrystals (NCs) in a silica matrix were prepared by the sol-gel method. Moreover, RE3+-doped Sr2GdF7 NCs were also synthesized by the solvothermal method for comparison purposes. All the obtained nanostructured materials were characterized structurally and spectroscopically for optical applications.
Thus, un-doped nGCs, nGCs single-doped with Ce3+, Eu3+, Sm3+, Dy3+ or Tb3+, and nGCs co-doped with the Ce3+-Eu3+, Ce3+-Sm3+, Ce3+-Dy3+ and Ce3+-Tb3+ couples were obtained by adequate thermal treatment of the precursor sol-gel glasses. Eu3+, Sm3+, Dy3+ or Tb3+ ions were selected as RE3+ dopants due to their characteristic emissions in the visible range. Because of the low absorption coefficients of these activator ions, Gd3+ and/or Ce3+ ions are used as co-sensitizers, owing to their higher absorption capacities and efficient energy transfer (ET) towards the activator ions, enhancing the RE3+ emissions.
The structural characterization was carried out by X-ray diffraction (XRD), transmission electron microscopy (TEM) and energy-dispersive X-ray spectroscopy (EDS) measurements, allowing us to study the crystalline structure, the size and distribution of the NCs in the nGCs, and the chemical composition. XRD confirmed the precipitation of tetragonal Sr2GdF7 NCs with an average size of around 8.4 nm. TEM images showed the presence of spherical NCs with sizes similar to those obtained from the XRD patterns. Finally, EDS measurements confirmed the presence of Sr, Gd and F as the main constituents of the nanocrystalline environments, with the expected stoichiometric ratio, i.e. 2:1:7, ascribed to Sr2GdF7.
The spectroscopic study, in turn, was performed through emission and excitation spectra along with lifetime measurements.
First, in the un-doped nGCs, sharp and intense excitation and emission peaks corresponding to Gd3+ ions are observed at around 273 and 311 nm, respectively. This emission overlaps with excitation peaks of Eu3+, Sm3+, Dy3+ or Tb3+ ions, which suggests a possible energy transfer (ET) from Gd3+ to the RE dopant ions. Thus, in the single-doped nGCs, by exciting the Gd3+ ions at 273 nm, besides the 311 nm emission peak of Gd3+, the corresponding emission peaks of Eu3+, Sm3+, Dy3+ or Tb3+ ions are observed, with much more intense emissions than under direct excitation.
Accordingly, the luminescence decay of the Gd3+ ions shows a reduction from the un-doped to the single-doped nGCs when co-doping with Eu3+, Sm3+, Dy3+ or Tb3+ ions, supporting the ET from Gd3+ to these ions.
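The lifetime argument above is commonly quantified through the energy-transfer efficiency, η = 1 − τ(donor with acceptor)/τ(donor alone); a one-line sketch of that standard relation (not spelled out in the abstract):

```python
def et_efficiency(tau_donor_alone, tau_donor_with_acceptor):
    """Energy-transfer efficiency inferred from the shortening of the
    donor (here Gd3+) luminescence lifetime when an acceptor ion
    (Eu3+, Sm3+, Dy3+ or Tb3+) is present."""
    return 1.0 - tau_donor_with_acceptor / tau_donor_alone
```

A halved donor lifetime thus corresponds to a transfer efficiency of 50%.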
In the Ce3+ single-doped nGC, excitation and emission bands were observed in the UV and UV-violet regions, respectively. The emission band overlaps with excitation peaks of Gd3+, Eu3+, Sm3+, Dy3+ or Tb3+ ions, suggesting a possible ET to these ions.
For the co-doped nGCs, comprising the Ce3+-Eu3+, Ce3+-Sm3+, Ce3+-Dy3+ or Ce3+-Tb3+ couples, Ce3+-sensitized visible emissions were observed in all cases. In particular, for the Ce3+-Dy3+ and Ce3+-Tb3+ co-doped nGCs, ET to these ions was observed both directly from the Ce3+ ions, Ce3+→(Dy3+/Tb3+), and mediated through Gd3+, Ce3+→(Gd3+)n→(Dy3+/Tb3+). However, for the Ce3+-Eu3+ and Ce3+-Sm3+ co-doped nGCs, the metal-metal charge transfer (MMCT) mechanism quenches the Ce3+ and Eu3+/Sm3+ emissions, inhibiting direct ET from Ce3+ to these ions. In this case, ET from Ce3+ to the activator ions only occurs through the Gd3+ ions, via the chain scheme Ce3+→(Gd3+)n→(Eu3+/Sm3+).
Finally, solvothermal Sr2GdF7 NCs, using Ce3+ and Eu3+ as dopants, were obtained to compare their structure, luminescence and ET mechanisms with those previously studied. In particular, the Ce3+ single-doped and Ce3+-Eu3+ co-doped NCs showed structural and luminescent properties similar to those previously observed in the corresponding nGCs.
Next, in order to verify the existence of the Gd3+ chain, Ce3+ single-doped Sr2GdF7 NCs were covered by an epitaxially grown shell of Eu3+ single-doped Sr2GdF7, giving rise to a Sr2GdF7:Ce3+@Sr2GdF7:Eu3+ core-shell system. By exciting the Ce3+ ions in the core NC, red emissions coming from the Eu3+ ions located in the shell were observed, pointing out the efficient ET mechanism from the Ce3+ sensitizer to the Eu3+ activators through the Gd3+ ion chain. Differences observed in the asymmetry ratio of the Eu3+ emissions were related to different environments of these ions and were confirmed with a Sr2GdF7:Ce3+@Sr2GdF7:Eu3+@Sr2YF7 core-shell-shell system, prepared for comparison purposes.
2021-06-24T12:00:24Z
2021-06-24T12:00:24Z
2021
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/24106
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/250202021-11-05T09:02:06Zcom_915_668com_915_488col_915_678
A study of the active galactic nucleus of NGC 4593 with adaptive optics assisted data from the integral field spectrograph MUSE
Sosa Guillén, Paula
Comerón Limbourg, Sébastien
Active Galactic Nuclei are highly energetic phenomena occurring in the central region of some massive galaxies. They consist of the accretion of matter onto a supermassive black hole located at the centre. This phenomenon is crucial for understanding the morphology and evolution of galaxies. Many types of galaxies display this feature; we are particularly interested in Seyfert 1 galaxies, which are recognised by their strong emission in broad spectral lines.
This work studies the central region of the galaxy NGC 4593, a Seyfert 1 galaxy. The data cube used for the study was taken with the integral field spectrograph MUSE (installed at the Very Large Telescope). This instrument gives access to very high observational quality, provided in part by its four laser guide stars, which allow the atmospheric turbulence to be corrected, and by its large field of view. NGC 4593 is a spiral galaxy characterised by two rings: an inner (nuclear) ring and a larger one in the outermost zone of the galaxy. It also shows very broad emission in the Hβ, Hγ and Fe ii lines.
The methodology used for the analysis is based on the GIST software, which allowed us to perform the Voronoi tessellation of the pixels of a pre-processed data cube. Once the Voronoi tessellation is done, the stellar kinematics can be studied with the pPXF program, from which the behaviour of the stars and the velocity maps are obtained. To achieve a good fit of the spectra, the emission lines of the central zone and the wavelengths associated with the laser used by the telescope were masked; in this way, good kinematic results could be obtained.
With the data gathered in this study, the kinematics and morphology of the central region of the galaxy covered by the field of view (4.005 × 4.005) were studied. The velocity maps were presented and properly treated. In addition, the image of the galaxy, obtained from the data cube, was studied with the DS9 software, allowing a surface-brightness analysis. Finally, a Python code was used to plot the rotation curve of the inner region of NGC 4593.
Lastly, conclusions were drawn from the results obtained during the work, gathering the most relevant aspects of both the data-analysis techniques and the morphology of NGC 4593.
The work is thus structured in blocks following the above: an introduction providing the theoretical context, the objectives addressed, the methodology applied to obtain the results (the line of work followed and the programs used), the results together with their discussion and, finally, the conclusions.
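The final step, plotting a rotation curve from the velocity map with a Python code, can be sketched as follows; the helper below is illustrative (the thesis' own script is not shown) and simply averages the line-of-sight velocities in a thin pseudo-slit along an assumed kinematic major axis:

```python
import numpy as np

def rotation_curve(vel_map, center, pa_deg, pixel_scale, n_bins=10, max_radius=None):
    """Extract a 1D rotation curve from a 2D line-of-sight velocity map
    (e.g. from pPXF) by binning velocities along the kinematic major axis."""
    ny, nx = vel_map.shape
    y, x = np.mgrid[0:ny, 0:nx]
    dx, dy = x - center[0], y - center[1]
    pa = np.radians(pa_deg)
    r_major = dx * np.cos(pa) + dy * np.sin(pa)    # signed distance along the axis
    r_minor = -dx * np.sin(pa) + dy * np.cos(pa)   # distance perpendicular to it
    mask = np.abs(r_minor) < 1.0                   # 1-pixel-wide pseudo-slit
    r = r_major[mask] * pixel_scale
    v = vel_map[mask]
    if max_radius is None:
        max_radius = np.max(np.abs(r))
    bins = np.linspace(-max_radius, max_radius, n_bins + 1)
    idx = np.digitize(r, bins)
    radii, vels = [], []
    for i in range(1, n_bins + 1):
        sel = idx == i
        if sel.any():
            radii.append(r[sel].mean())
            vels.append(v[sel].mean())
    return np.array(radii), np.array(vels)
```

The returned arrays can then be plotted directly (e.g. with matplotlib) to obtain the rotation curve of the inner region.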
2021-07-29T11:31:41Z
2021-07-29T11:31:41Z
2021
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/25020
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/162562021-11-05T09:01:48Zcom_915_668com_915_488col_915_678
Large scale structure of the Universe: statistics from cosmological simulations
Gómez Miguez, Martín Manuel
Dalla Vecchia, Claudio
Balaguera Antolínez, Andrés
Cosmology
Simulation
Statistics
Improvements in observational techniques have considerably increased both the catalogue of observations and our knowledge of the anisotropies of the cosmic microwave background, fostering the development of cosmology. In order to reproduce the evolution of cosmological phenomena and predict the observations of future scientific missions, N-body simulations are run on supercomputers, given the high cost in both memory and computing time when a large number of particles is considered. In the practical part of this work, a brief introduction to the standard cosmological model and the parameters that characterise it is given, and different cosmological simulations are run with the PKDGRAV3 code for a cubic volume of side 100 Mpc/h, starting from an initial redshift z = 49 down to the present time, storing ten snapshots of the particle distribution. The objects under study are dark-matter particles whose dynamics can be described by the equations of an ideal pressureless fluid. To analyse the statistical behaviour of the resulting matter distributions, the theoretical formalism of one-point and two-point statistics is developed and applied to these results.
First, two of the most common mass-assignment schemes are implemented to interpolate the discrete matter distribution onto a mesh that divides the total volume into cubic cells. Next, the distribution function of the fluctuations of the matter density about the critical density is obtained from the results of simulations with 128³ and 200³ particles, comparing how the chosen filter function affects its construction and the differences observed between the two simulations. The measurements are then fitted to the lognormal model, the simplest analytical model for characterising the non-linear behaviour of a statistical distribution. The evolution of the distribution with redshift is also studied through the analysis of its first four moments, finding that, as time passes, overdense regions collapse, concentrating most of the matter and progressively emptying the remaining regions of the Universe, which gives rise to the formation of structures such as dark-matter halos.
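The one-point pipeline just described (moments of the overdensity field and a lognormal fit) reduces to a few lines; a minimal numpy sketch under the standard convention δ = ρ/ρ̄ − 1, with illustrative names:

```python
import numpy as np

def one_point_stats(delta):
    """First four moments of the overdensity field delta = rho/rho_mean - 1:
    mean, variance, skewness and excess kurtosis."""
    d = delta.ravel()
    mean = d.mean()
    var = d.var()
    skew = np.mean((d - mean)**3) / var**1.5
    kurt = np.mean((d - mean)**4) / var**2 - 3.0
    return mean, var, skew, kurt

def lognormal_fit(delta):
    """Fit the lognormal model: ln(1 + delta) is assumed Gaussian, so the
    fit reduces to the mean and variance of the log-transformed field."""
    logfield = np.log1p(delta.ravel())
    return logfield.mean(), logfield.var()
```

Growing skewness and kurtosis with decreasing redshift are the signature of the collapse of overdense regions described above.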
Turning to the two-point statistical analysis, the discrete matter distribution is used to obtain the correlation function, which measures the probability of finding a pair of objects separated by a certain distance relative to what would be observed for a random distribution. To this end, the DD (Data-Data), DR (Data-Random) and RR (Random-Random) pair histograms are measured, from which a series of estimators can be defined, such as those suggested by Peebles and by Landy-Szalay, which yield very similar results. The measurement is also carried out at different times, studying how the influence of gravity on structure formation evolves at different scales in different epochs of the Universe. The power spectrum, defined as the Fourier transform of the correlation function, can be expressed in terms of the squared modulus of the Fourier transform of the overdensity field, which gives it an advantage in computing time over the correlation function and makes it the most widely used function for characterising two-point statistics. Owing to the discrete nature of the statistical sample, the measurements carry an error known as shot noise, which can be modelled as Poissonian; in addition, the FFT (Fast Fourier Transform) implementation introduces distortions in the shape of the spectrum due to the mass-assignment scheme and to aliasing, which arises when periodic boundary conditions are adopted by virtue of the Cosmological Principle. Mass assignment translates into a loss of power on small scales with respect to the theoretical predictions, more pronounced for CIC than for NGP, while aliasing produces an excess of power near the Nyquist frequency due to the influence of higher-frequency modes, an aspect treated in detail in the theoretical development.
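The mass-assignment and power-spectrum steps can be sketched as follows; this is a minimal illustrative implementation (NGP and CIC assignment, FFT-based P(k) with Poisson shot-noise subtraction), not the thesis' actual code, and it omits the deconvolution of the assignment window discussed above:

```python
import numpy as np

def density_grid(pos, box, ngrid, scheme="ngp"):
    """Assign particles (positions in [0, box)) to a cubic mesh with
    nearest-grid-point (NGP) or cloud-in-cell (CIC) weighting."""
    grid = np.zeros((ngrid,) * 3)
    cell = box / ngrid
    if scheme == "ngp":
        idx = (pos // cell).astype(int) % ngrid
        np.add.at(grid, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
    else:  # CIC: share each particle between the 8 nearest cells
        u = pos / cell - 0.5
        i0 = np.floor(u).astype(int)
        f = u - i0
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    w = (np.abs(1 - dx - f[:, 0]) * np.abs(1 - dy - f[:, 1])
                         * np.abs(1 - dz - f[:, 2]))
                    np.add.at(grid, ((i0[:, 0] + dx) % ngrid,
                                     (i0[:, 1] + dy) % ngrid,
                                     (i0[:, 2] + dz) % ngrid), w)
    return grid

def power_spectrum(grid, box, nbins=16):
    """Spherically averaged P(k) of delta = grid/mean - 1, with the
    Poisson shot noise box^3/N subtracted."""
    n = grid.shape[0]
    npart = grid.sum()
    delta = grid / grid.mean() - 1.0
    dk = np.fft.rfftn(delta) * (box / n)**3
    pk3d = np.abs(dk)**2 / box**3
    kf = 2 * np.pi / box                      # fundamental frequency
    kx = np.fft.fftfreq(n, d=1.0 / n) * kf
    kz = np.fft.rfftfreq(n, d=1.0 / n) * kf
    kmag = np.sqrt(kx[:, None, None]**2 + kx[None, :, None]**2
                   + kz[None, None, :]**2)
    bins = np.linspace(kf, kmag.max(), nbins + 1)
    which = np.digitize(kmag.ravel(), bins)
    kcen, pk = [], []
    for i in range(1, nbins + 1):
        sel = which == i
        if sel.any():
            kcen.append(kmag.ravel()[sel].mean())
            pk.append(pk3d.ravel()[sel].mean() - box**3 / npart)  # shot noise
    return np.array(kcen), np.array(pk)
```

For a purely Poisson-random particle set the subtracted spectrum fluctuates around zero, which is a quick sanity check before running on actual snapshots.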
To check whether the simulation results are as expected and whether the spectrum has been measured properly, two procedures are followed. First, the power spectrum is measured by direct summation, giving a measurement free of the mass-assignment effect and allowing the goodness of the applied corrections to be verified. Next, the online tool CAMB, which solves the Boltzmann equations and returns a theoretical power spectrum as its solution, is used; it can be adjusted to the cosmological parameters employed by PKDGRAV3, allowing us to verify that the simulated and theoretical results agree. Finally, the project closes with a discussion of the time evolution of the power spectrum.
2019-10-02T13:40:29Z
2019-10-02T13:40:29Z
2019
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/16256
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/30522021-11-05T09:02:07Zcom_915_668com_915_488col_915_678
Medición de la velocidad de la luz
Mantero Castañeda, Eduardo Alberto
Roca Cortés, Teodoro
Astronomía y Astrofísica
Física solar
2016-09-02T09:40:28Z
2016-09-02T09:40:28Z
2016
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/3052
es
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 internacional)
oai:riull.ull.es:915/157282021-11-05T09:02:27Zcom_915_668com_915_488col_915_678
Anisotropy and lifetime decay of green fluorescent protein in glycerol
Brauner, Maren
Lahoz Zamarro, Fernando
Fluorescence is a phenomenon that can be observed on the surface of tonic water in sunlight, where the emission of blue light comes from a substance called quinine. Sir John F. W. Herschel [1] discovered this phenomenon in 1845. Nowadays, fluorescence is a useful research tool in various scientific fields such as biology, not only acting as a stain for diverse applications such as tracing cell organelles, but also giving important information about its environment, such as the viscosity of the medium. This work focuses on the green fluorescent protein (GFP), with the objective of measuring its emission spectrum, its lifetime and its anisotropy decay. According to the experiment performed, the emission peak is located in the green wavelength range at λem = 510 nm. The lifetime decay also shows the expected behaviour, reflecting the effects that occur during the excitation period. For the anisotropy decay measurements, the objective is to compare the measured results with different models for the three-dimensional shape of the GFP. Knowing the rotational correlation time θ from the experiment, the volume V of the protein can be determined. Conversely, if the volume is known and θ is measured, the viscosity of the medium surrounding the GFP can be found. This part of the research proved complicated, as not all of the obtained data matched expectations; hence both the models and ideas on how to improve the procedure are discussed theoretically in more detail.
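The θ-V-η relation invoked above is the Stokes-Einstein-Debye one, θ = ηV/(k_B T); a minimal sketch of the two directions in which it is used:

```python
# Stokes-Einstein-Debye: theta = eta * V / (kB * T), so either unknown
# can be solved for once the other two are known.
KB = 1.380649e-23  # Boltzmann constant, J/K

def viscosity_from_theta(theta, volume, temperature):
    """Medium viscosity (Pa s) from a measured rotational correlation
    time theta (s) and a known hydrodynamic volume (m^3)."""
    return theta * KB * temperature / volume

def volume_from_theta(theta, eta, temperature):
    """Hydrodynamic volume (m^3) from theta and a known viscosity."""
    return theta * KB * temperature / eta
```

Note this assumes an effectively spherical rotor; for the barrel-shaped GFP the comparison with different shape models, as done in the work, is what tests that assumption.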
2019-07-26T10:40:10Z
2019-07-26T10:40:10Z
2019
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/15728
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/257422021-11-05T09:01:52Zcom_915_668com_915_488col_915_678
Nanopartículas luminiscentes para aplicaciones en dispositivos ópticos. Obtención, caracterización estructural y estudio espectroscópico.
Medina Alayón, Francisco Miguel
Castillo Vargas, Francisco Javier del
Yanes Hernández, Ángel Carlos
Despite the apparent association of nanoparticles with modern science, the first records of their use date back to the Middle Ages, when they were made with primitive methods for the purpose of embellishing artistic representations. However, it is only today that the potential applications of nanostructured materials are being exploited, covering scientific fields ranging from medicine, with the development of luminescent biocapsules for the treatment of some serious diseases, to the energy and lighting sectors, with substantial improvements in the efficiency of photovoltaic panels and in the quality of illumination. In particular, when these materials are doped with rare-earth ions, important luminescent properties and photonic effects emerge, as in the case of the up-conversion phenomenon, which can be especially interesting in photocatalytic applications such as hydrogen production through water-splitting or the treatment of wastewater.
In the present work, different nanostructured materials based on NaYbF4 and doped with Eu3+ (5% and 2%) or Tm3+ (1%) ions have been produced through different synthesis methods, whose advantages over traditional methods have been reaffirmed. On the one hand, different nanocrystal (NC) samples were synthesized by the solvothermal method, which involves organic solvents and surfactants and works at high pressures and relatively low temperatures through the use of an autoclave. On the other hand, several nano-glass-ceramics (nGCs) were produced by the sol-gel technique, which allows nanostructured materials of optical quality to be obtained, controlling their composition precisely and working at lower temperatures than conventional melt-quenching techniques. Additionally, the importance of doping these materials with rare earths lies in the electronic configuration of those ions, since the phenomenon called lanthanide contraction produces electronic mismatch effects that magnify their magnetic and optical properties when interacting with visible, ultraviolet and infrared radiation.
The synthesized nanostructured materials were then subjected to structural characterization using X-ray diffraction (XRD), transmission electron microscopy (TEM-HRTEM) and energy-dispersive X-ray spectroscopy (EDS). First, XRD measurements confirmed the formation of solvothermal cubic and hexagonal NaYbF4 NCs, with average sizes of 17 nm and 80 nm, respectively (calculated using the Scherrer equation), showing the possibility of selecting the nanocrystalline phase as a function of the chosen heat treatment. Second, the TEM-HRTEM images also confirmed the formation of those NCs, with spherical morphology and sizes similar to those estimated from the XRD measurements. Third, the EDS measurements confirmed the presence of the expected chemical elements in the different samples. Moreover, the precipitation of cubic NaYbF4 NCs into the SiO2 matrix was also confirmed for all the nGCs, with average sizes around 7 nm.
Once the structural characterization was completed, a spectroscopic study of the different nanostructured materials was carried out through excitation and emission spectra.
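The crystallite sizes quoted above follow from the Scherrer equation, D = Kλ/(β cos θ); a minimal sketch, assuming the usual shape factor K ≈ 0.9 and an instrumentally corrected peak width:

```python
import math

def scherrer_size(wavelength_nm, fwhm_deg, two_theta_deg, K=0.9):
    """Average crystallite size (nm) from an XRD peak: X-ray wavelength,
    peak FWHM in degrees (instrumental broadening already removed) and
    peak position 2-theta in degrees."""
    beta = math.radians(fwhm_deg)          # FWHM in radians
    theta = math.radians(two_theta_deg) / 2.0
    return K * wavelength_nm / (beta * math.cos(theta))
```

For Cu Kα radiation (λ = 0.15406 nm), a 0.5° FWHM peak near 2θ = 28° gives a size of roughly 16 nm, of the order of the cubic-phase value reported above.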
On the one hand, and with the purpose of complementing the structural characterization, an analysis of the Eu3+ ion environment was carried out, taking advantage of its properties as a spectroscopic probe to determine the site symmetry of the dopant RE3+ ions. Emission spectra of the solvothermal NCs excited at 393 nm, corresponding to the 7F0 → 5L6 transition, showed that the ratio of the emissions associated with 5D0 → 7F1 and 5D0 → 7F2, known as the asymmetry ratio, takes a value of R = 1.19 for the cubic NCs and R = 1.91 for the hexagonal NCs, related to the different symmetry sites occupied by the RE3+ ions. On the other hand, R = 1.31 was obtained for the 95SiO2-5NaYbF4:2%Eu3+ nGC, suggesting an effective incorporation of these ions into the cubic NaYbF4 NCs. Moreover, R = 1.87 when exciting at 464 nm suggests that some Eu3+ ions remain in the glassy matrix.
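The asymmetry ratio used above is conventionally computed as the ratio of the integrated 5D0→7F2 and 5D0→7F1 band intensities; a minimal sketch, with illustrative integration windows (the thesis' exact windows are not given):

```python
import numpy as np

def asymmetry_ratio(wavelength, intensity, band_f1=(585, 600), band_f2=(608, 630)):
    """Asymmetry ratio R = I(5D0->7F2) / I(5D0->7F1) from an Eu3+
    emission spectrum sampled on a uniform wavelength axis (nm).
    The band windows are assumed, typical values for Eu3+."""
    step = wavelength[1] - wavelength[0]
    def integrate(lo, hi):
        sel = (wavelength >= lo) & (wavelength <= hi)
        return np.sum(intensity[sel]) * step
    return integrate(*band_f2) / integrate(*band_f1)
```

Since the 7F2 transition is hypersensitive to the local field while 7F1 is magnetic-dipole allowed, a larger R signals a lower-symmetry Eu3+ environment, which is how the cubic/hexagonal and NC/glass sites are distinguished above.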
On the other hand, intense UV up-conversion emissions were observed when exciting at 980 nm in the cubic and hexagonal Tm3+-doped solvothermal NCs. The overall UC emissions of the hexagonal NCs are more intense than those of the cubic NCs, which can be related to the different symmetry sites of the RE3+ ions. Correspondingly intense up-conversion emissions were also observed for the Tm3+-doped nGC.
Furthermore, the dependence of the up-conversion emission intensity on the laser pumping power was analysed from logarithmic representations. The results showed saturation effects, since the measured values were lower than the theoretically expected ones. This phenomenon can be related to the competition between linear decay and UC processes for the depletion of the intermediate excited levels and, in all the studied materials, to the high amount of Yb3+ ions.
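The pump-power analysis above amounts to fitting the slope n of log I versus log P (for an n-photon up-conversion process, I ∝ Pⁿ), with saturation flagged when the fitted n falls below the expected photon number; a minimal sketch with made-up sample values:

```python
import numpy as np

def uc_slope(power, intensity):
    """Slope n of log(I) vs log(P) for up-conversion emission; a value
    below the photon number expected for the transition indicates
    saturation of the up-conversion process."""
    n, _ = np.polyfit(np.log(power), np.log(intensity), 1)
    return n
```

For instance, a slope near 1.6 for a nominally two-photon UV emission would be read as the saturation behaviour described above.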
Finally, taking advantage of the intense UV up-conversion emissions observed, a methylene blue photocatalysis experiment was carried out in order to evaluate potential industrial applications of these nanostructured materials. On the one hand, the methylene blue degradation curves obtained with the Tm3+ solvothermal NCs showed an important degradation, more intense for the hexagonal NCs than for the cubic ones. On the other hand, a degradation of 31% was obtained for the Tm3+-doped nGC. These results point to promising applications of these nanostructured materials in photocatalytic processes, such as hydrogen generation through water-splitting or the treatment of wastewater.
2021-10-22T09:46:47Z
2021-10-22T09:46:47Z
2021
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/25742
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/257292021-11-05T09:01:51Zcom_915_668com_915_488col_915_678
Dark Matter as Bose-Einstein condensates
García-Pérez Piñeiro, Julio
Delgado Borges, Vicente
This work presents a model to describe dark matter based on Bose-Einstein condensates. Dark matter is a still unknown type of matter that does not interact with electromagnetic radiation (such as light) and accounts for approximately 80% of the matter in the Universe. The most widespread model to explain dark matter is the cold dark matter (CDM) model; although it has been successful, it faces several problems that motivate the search for other kinds of solutions. This is where the interest of this work lies: to explore an alternative based on the assumption that dark matter is composed of quantum particles aggregated in Bose-Einstein condensates.
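Models of this kind typically describe the condensate by a Gross-Pitaevskii equation coupled to the Newtonian potential (the Gross-Pitaevskii-Poisson system); as a sketch of that standard structure, not taken verbatim from this work:

```latex
i\hbar\,\partial_t \psi
  = -\frac{\hbar^2}{2m}\nabla^2\psi + m\,\Phi\,\psi + g\,|\psi|^2\psi,
\qquad
\nabla^2\Phi = 4\pi G\, m\,|\psi|^2
```

Here $m$ is the boson mass, $g$ the self-interaction strength and $\Phi$ the gravitational potential sourced by the condensate density $m|\psi|^2$; quantum pressure and self-interaction then replace the purely collisionless dynamics of CDM on small scales.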
2021-10-22T09:20:26Z
2021-10-22T09:20:26Z
2021
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/25729
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/157402021-11-05T09:02:28Zcom_915_668com_915_488col_915_678
Formation and evolution of galaxies in cold dark matter cosmology
Arjona Gálvez, Elena
Dalla Vecchia, Claudio
Brook, Christopher Bryan
spiral galaxies
elliptical galaxies
lenticular galaxies
formation
evolution
host galaxy
satellite galaxy
massive galaxy
passive
star forming
The study of galaxy formation and evolution is one of the major open lines of research in astrophysics today. Understanding how galaxies formed, their morphology and how they have changed over time is a major step towards understanding the present-day universe. Several theories explain galaxy evolution; the most widely accepted holds that it is driven, to a large extent, by mergers between galaxies. These processes are so violent that, during a merger, the stars and dark matter of each galaxy are subjected to a strong gravitational potential, which can even change their morphology. Thus, when two galaxies collide, a merger takes place in which the ordered orbital motion of the stars of each galaxy is randomized, so that stars that moved in an ordered fashion before the merger end up on completely random orbits, producing an elliptical galaxy.
To test this, [Ruiz et al., 2015] used observational data to study the number of satellites of massive galaxies and its dependence on the morphology of the host galaxy. They found that, for host masses between 10^11 M☉ and 2×10^11 M☉, elliptical galaxies host the largest number of satellites, followed by lenticular galaxies and, finally, spiral galaxies. This result supports galaxy mergers as the most likely evolutionary channel.
Our work consists in reproducing the study of [Ruiz et al., 2015] with galaxy simulations. To do so, we use EAGLE, a suite of hydrodynamical simulations developed by the Virgo Consortium. Thanks to the wide variety of physical processes they include, these simulations are a powerful tool for understanding the different physical mechanisms at work in the universe.
At the start of the project we extract from EAGLE all central galaxies, at redshift zero, belonging to a galactic halo with stellar mass above 10^11 M☉ and, separately, the satellite galaxies of these haloes with stellar mass above 10^9 M☉. The satellites are then matched to their central galaxies, and the centrals are classified by morphology. After this, we carry out a detailed study of the dependence of the number of satellites on the morphology of the central galaxy, first in the mass range used by [Ruiz et al., 2015] and then for all the extracted massive centrals, thus comparing the observational results with the simulations.
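The selection and counting procedure described above can be sketched as follows (a toy example; the group and morphology fields are illustrative, not the actual EAGLE catalogue columns):

```python
# Toy sketch of the selection: keep central galaxies with stellar mass
# > 1e11 Msun, attach satellites with mass > 1e9 Msun to their host halo,
# and count satellites per central-galaxy morphology.
from collections import defaultdict

galaxies = [
    # (group_id, stellar_mass [Msun], is_central, morphology)
    (1, 2.0e11, True,  "elliptical"),
    (1, 5.0e9,  False, None),
    (1, 3.0e9,  False, None),
    (2, 1.5e11, True,  "spiral"),
    (2, 2.0e9,  False, None),
    (3, 5.0e10, True,  "spiral"),   # below the central mass cut: excluded
]

# Centrals passing the stellar-mass cut, keyed by halo group
centrals = {g[0]: g[3] for g in galaxies if g[2] and g[1] > 1e11}

# Count satellites above the satellite mass cut, per central morphology
sat_counts = defaultdict(int)
for group_id, mass, is_central, _ in galaxies:
    if not is_central and mass > 1e9 and group_id in centrals:
        sat_counts[centrals[group_id]] += 1

print(dict(sat_counts))  # → {'elliptical': 2, 'spiral': 1}
```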
Our work is structured as follows. Chapter 1 lays out the theoretical background of the topic. Chapter 2 briefly presents the objectives and the methodology followed throughout the project. Chapter 3 gives a detailed description of EAGLE and how it works. Finally, chapters 4 and 5 present the results and the conclusions obtained, as well as possible future work.
2019-07-26T10:50:38Z
2019-07-26T10:50:38Z
2019
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/15740
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/300292023-01-11T09:46:30Zcom_915_668com_915_488col_915_678
Introducción a la espectropolarimetría solar
Alvarez Torres, Yasaira
Ruiz Cobo, Basilio
Grado en Física
Spectropolarimetry is a branch of astrophysics that focuses on the study of the Sun by analyzing the polarized light that reaches us from it. This field has been very important in improving our knowledge of the different solar structures, as well as of their magnetic fields and their different temperatures, among other characteristics.
Throughout this work we discuss the different solar structures, paying special attention to how the solar magnetic field works, what exactly sunspots are and what their main parts are (umbra and penumbra), and what the granulation of the solar surface consists of. By introducing theoretical concepts such as the Zeeman effect and the Stokes parameters, the foundations are laid for a practical study using data from a satellite, HINODE. These data, obtained with spectropolarimetric techniques, contain the profiles of the Stokes parameters (I, Q, U, V) per pixel at different wavelengths for a specific area of the Sun, where two spectral lines of Fe I (neutral iron) are mainly visible.
With these Stokes-parameter data we can first represent the different slices that make up the data cube for the intensity parameter I, the most important being the image showing the sunspot that is the object of study. In addition, the spectrograms along the X axis have been represented for the remaining Stokes parameters, as well as their profiles at different points of the solar continuum.
Starting with the practical measurement process, a calibration of the data was first carried out using the spectrum of the FTS [7] atlas to convert our data to wavelengths. Once this was done, and taking into account that some physical variables modify the spectra of the Stokes parameters, we were able to carry out a series of measurements.
The first parameter measured was the temperature, obtaining good results that account for the notable difference in temperature between the umbra and the quiet Sun; for the error estimate, the so-called Monte Carlo method was used. Next, the magnetic field strength in the study region was measured, separating the calculations according to whether the strong-field or weak-field criterion is fulfilled, and obtaining an image that shows the field differences across the different areas of the sunspot as well as in the quiet-Sun region. As expected, the results show that the magnetic field is much more intense in the umbra than in the quiet-Sun region; in this case, error propagation was used. Finally, the velocities of the solar surface were measured, obtaining fluctuations of about 2 km/s, whose error was again estimated with the Monte Carlo method.
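The Monte Carlo error estimate mentioned here can be sketched as follows (a minimal toy version: the "measurement" is a simple mean, whereas the real pipeline operates on the observed Stokes profiles):

```python
# Monte Carlo error bar: repeat the measurement many times on data with
# synthetic noise added, and take the spread of the results as the error.
import random
import statistics

def measure(samples):
    # toy measurement: the mean of the samples
    return sum(samples) / len(samples)

random.seed(0)
true_signal = [1.0] * 100   # noiseless toy profile
noise_sigma = 0.1           # assumed noise level of the data

estimates = []
for _ in range(500):
    noisy = [s + random.gauss(0.0, noise_sigma) for s in true_signal]
    estimates.append(measure(noisy))

error = statistics.stdev(estimates)  # Monte Carlo error bar
# expected to be close to noise_sigma / sqrt(len(true_signal)) = 0.01
```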
Finally, three correlations between the parameters studied were examined, in order to better understand the results and draw firmer conclusions. The plot of magnetic field against temperature confirms that the areas corresponding to the umbra show a higher magnetic field together with a much lower temperature, as a consequence of the differences in the level of the surface material in that region. In the correlation between velocity and temperature, it is striking that the points are scattered between 2 and -2 km/s; this is possibly caused by the scatter from the five-minute oscillation of the solar surface, together with noise in the data. Finally, the correlation between the magnetic field and the velocity was examined, where the 5-minute oscillation of the solar surface becomes noticeable again.
To conclude the work, the most interesting phenomena observed with these traditional measurement methods are reviewed, such as the Zeeman effect, which causes a splitting of the spectral lines, in this case of Fe I. This splitting is what has allowed us to measure the magnetic field in the study region, obtaining good results from the umbra to the quiet region. The 5-minute oscillation of the solar surface has also become noticeable, and it still occupies an important place in research today. Finally, we have verified the phenomenon of convection, which affects the quiet-Sun regions but not the sunspot area.
2022-09-29T10:41:35Z
2022-09-29T10:41:35Z
2022
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/30029
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/206682021-11-05T09:01:56Zcom_915_668com_915_488col_915_678
Una galaxia E+A al microscopio: estudio detallado de SDSS J092006.43+015807.7 con MUSE
Fernandez Arroyo, Lucia
Monreal Ibero, Ana
E+A galaxies were discovered by Dressler and Gunn in 1983. They are characterized by very strong absorption in the Balmer lines and by the absence of [O ii] in their spectra: in short, there is no sign of star formation anywhere in the galactic structure. Although they are considered a key element in the study of galaxy evolution, the literature on E+A galaxies is scarce, and the mechanism by which they form is not established, though there are two hypotheses: galaxy-galaxy interaction or ram-pressure stripping. In particular, for the galaxy studied in this work, SDSS J092006.43+015807.7, the only information available in the optical is a false-colour image and a spectrum taken through a 3" diameter aperture on its central region.
The data cube taken by the integral field spectrograph MUSE (mounted on the VLT, Very Large Telescope) has made the spectral study of SDSS J092006.43+015807.7 possible. After obtaining the observed spectra of 22 regions distributed across the whole galaxy, these were run through the FADO program to obtain their modelled stellar component. The spectrum resulting from subtracting the modelled stellar part (that is, the gas component) was used to measure the integrated fluxes of several spectral lines, from which information is derived throughout the work. Before the fluxes can be interpreted, they must be corrected for attenuation, which is the effect of interstellar dust on the light reaching us from an object. To do this, the attenuation of each region must be obtained; here, the Balmer decrement was used for the gas attenuation and the FADO modelling results for the stellar one. In addition, diagnostic (BPT) diagrams were studied to discern the origin of the gas ionization in each aperture. The relation between the equivalent widths of the Na D doublet and the attenuation was also studied, as well as the relative velocity of each region with respect to the galactic nucleus.
The work is divided into several blocks. First, an introduction to E+A galaxies and their main spectral characteristics is given, together with the instrument (MUSE) used to obtain the data cube analysed in this project. Some specific parameters of the target galaxy, SDSS J092006.43+015807.7, are also included. The second block explains the methodology applied to obtain the results: the lines of work followed and the programs used. In particular, it highlights the construction of models for the spectral lines, key to deriving the parameters to be discussed. Finally, the results are analysed and compared with other works, whether on the same type of galaxy or on others; the latter were used to compare values from better-studied galaxies with those obtained for SDSS J092006.43+015807.7, underlining the fact that E+A galaxies are atypical and comparatively poorly documented. Notably, the study includes an attenuation-related factor that, to our knowledge, had not previously appeared in work on E+A galaxies.
To close the work, we draw a conclusion compiling the main results obtained. The morphology of SDSS J092006.43+015807.7 and its type of rotation indicate that this galaxy arose from a merger between two galaxies of different mass. Moreover, the analysis of the BPT diagnostic diagrams points to the nucleus hosting an AGN, whose influence could be responsible for the shutdown of star formation. The large amount of ionized gas in the galaxy is supported by the attenuation values computed for both the gas and the stars and, in addition, the sodium excess indicates that the interstellar-medium content is also high, beyond the ionized gas. These results are the outcome of a first analysis of the morphological and spectral characteristics of SDSS J092006.43+015807.7. To close the study, we have added a set of future lines of work to consolidate our conclusions and extend the knowledge of this galaxy and other E+A galaxies.
2020-07-28T09:41:19Z
2020-07-28T09:41:19Z
2020
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/20668
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/300262023-01-11T11:11:24Zcom_915_668com_915_488col_915_678
Espectropolarimetría en aproximación Milne-Eddington
Gómez Alayón, Jesús
Ruiz Cobo, Basilio
Grado en Física
This bachelor's thesis covers the construction of a code that solves the radiative transfer equation (RTE) under the Milne-Eddington approximation, together with an introduction to the framework involved. The central object of this framework is the Sun, a G-type main-sequence star; in particular, our study focuses on establishing a channel of information about the processes occurring in the star's atmosphere, as well as about its composition. Here that channel is spectropolarimetry, which measures the observed optical spectra and, with them, the wavelength and polarization of the light coming from the Sun. The magnetic field is the main cause of this polarization and is responsible for the Stokes parameters (together with scattering, though this effect is of lesser importance). To recover the profiles we must model the magnetic field of the solar atmosphere; in this way we can invert the process the photons undergo in the solar photosphere.
To model the changes in the polarization of the photons it is necessary, since we are dealing with magnetic fields, to take the Zeeman effect into account. This effect arises in the presence of a magnetic field and manifests itself in the observed spectral lines, which appear split into several components (of different energies). The reason is that the energy levels involved are themselves split, so that more than one transition exists between two levels, producing photons with energies different from the one expected for a transition to a lower level in the field-free atom.
Another effect to consider in this modelling is the motion of the plasma, since such perturbations can also modify the observed spectrum. Modelling these aspects requires developing the RTE, which gives an adequate picture of how the processes above modify the radiation we receive from the Sun. However, the behaviour is quite involved, largely because of the couplings between several quantities, which are the main source of the complexity of the problem.
Unfortunately, the RTE is generally not solvable analytically, so its solutions must be approximated by different methods; the one used in this document is the Milne-Eddington approximation. This approximation yields an analytic solution by imposing specific descriptions of the quantities characterizing the medium in the equation due to Landi Degl'Innocenti.
The text briefly reviews the contributions of Unno, Rachkovsky and Landi Degl'Innocenti before proceeding to the modelling of the problem: Unno's work was a good first approximation, later refined by Rachkovsky, but the final equation treated here is due to Landi Degl'Innocenti. It is necessary to know the three approximations adopted to reach Landi Degl'Innocenti's result. First, the relevant quantities describing the solar atmosphere are taken constant with depth. Second, the source function is taken linear in the optical depth. Finally, local thermodynamic equilibrium (LTE) is imposed. This last approximation is not strictly essential, but it makes the situation easier to interpret physically, since under LTE the source function is simply the Planck function, and therefore depends only on the temperature and not on the Stokes parameters themselves, as happens in the general case.
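Under these assumptions the source function takes the standard Milne-Eddington form (a sketch of the textbook relations, not quoted from the thesis):

```latex
% Source function linear in the continuum optical depth, Planck under LTE
S(\tau_c) = S_0 + S_1\,\tau_c , \qquad S = B_\nu(T) \quad \text{(LTE)} .
% In the line-free case this gives the Eddington-Barbier emergent intensity
% along a line of sight with \mu = \cos\theta:
I_\nu(0,\mu) = S_0 + S_1\,\mu .
```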
The code has been developed in Python. It is especially demanding at the points where expressions depending on a certain integral must be evaluated a large number of times; this integral is not built into the Python environment, so it has been computed through numerical approximations.
After obtaining the different images using one of the Fe I lines as original parameters, the results were put to the test by varying the parameters over known ranges. This made it possible to check whether the behaviour of the different plots matches their dependence on the formulas developed in the previous sections.
In a work of this kind one might ask about the errors of the procedure; however, since this is a synthesis code in the Milne-Eddington approximation, the spectral profiles obey an analytic formula under these conditions. They therefore carry no error beyond the numerical precision of the parameters involved.
2022-09-29T10:41:07Z
2022-09-29T10:41:07Z
2022
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/30026
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/256952023-02-03T10:51:26Zcom_915_668com_915_488col_915_678
Quantum decoherence. A work on the quantum measurement problem
Rodr´ıguez Gonz´alez, Sergio
Brouard Martín, Santiago
The objectives of this project are several, and their pursuit is organized in three chapters. The first explains what the measurement problem is and how decoherence partially solves it, accounting for why we are only able to measure non-superposed states. The second chapter shows how decoherence indeed emerges naturally when the interaction between a system and its environment is considered, thus justifying Von Neumann's irreversible reduction process. These first two chapters also establish a mathematical framework based on master equations and CPT maps, which is used in the last chapter to achieve the ultimate goal: describing the evolution of the accessible states of a quantum system on which a measurement is performed, without resorting to the measurement postulates.
2021-10-19T08:04:11Z
2021-10-19T08:04:11Z
2021
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/25695
en
http://creativecommons.org/licenses/by-nc-nd/4.0/
info:eu-repo/semantics/openAccess
Attribution-NonCommercial-NoDerivatives 4.0 Internacional
oai:riull.ull.es:915/257382023-01-18T14:07:38Zcom_915_668com_915_488col_915_678
Estudio de la incidencia de episodios de calima en Canarias mediante modelos climáticos globales
Herrera Cruz, Cristina
González Fernández, Albano José
Expósito González, Francisco Javier
Aerosoles atmosfericos
GCM
CMIP6
GFDL
MIROC
At present, the study of atmospheric aerosols has aroused great interest, especially in places where, owing to their geographical location, dust intrusions are frequent. One example is the Canary Islands, which suffer episodes of desert dust from the African continent. These episodes, known as calima, affect the radiative balance and cloud formation, and also influence human health and ecosystems.
The study of desert dust intrusions has evolved over the years thanks to advances in observational methods and numerical models. In the present study, the potential of the GCMs (Global Climate Models) of the new phase of CMIP (Coupled Model Intercomparison Project), CMIP6, will be evaluated. For this purpose, the results of the simulations of these models will be compared with observations from the recent past. In particular, the only three models that have made daily dust-concentration data available, namely IPSL, GFDL and MIROC6, will be used. These three models allow us to analyse aerosol transport and generation through simulations.
For the observations, the data studied come from MERRA version 2 (Modern-Era Retrospective analysis for Research and Applications), obtained from the reanalysis of space-based aerosol observations. For both the simulated models and the observations we worked with column dust concentrations (kg m⁻²) and, in order to study the incidence of calima episodes, the data associated with a grid point centred on the Canary Islands were chosen.
The study begins by determining the percentile associated with the concentration corresponding to an atmospheric aerosol episode, namely the 60th percentile, using the information on dust episodes provided by the Ministerio para la Transición Ecológica y el Reto Demográfico. Once the percentile was determined, the monthly mean column concentration, the number of days above the 60th percentile and the number of days above the 95th percentile were analysed for two periods: the historical period, from 1980 to 2009, and the future period, divided into mid-century (2030-2059) and late century (2070-2099). In addition, the SSP (Shared Socioeconomic Pathways) scenarios from CMIP6, which describe future CO2 concentrations, are used for the future period.
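The percentile-based episode count described above can be sketched as follows (with synthetic daily data; the real analysis uses MERRA-2 and CMIP6 daily fields):

```python
# Count the days whose dust column concentration exceeds the 60th and 95th
# percentiles of a reference period (here one synthetic year of daily data).
import random

def percentile(data, q):
    # simple nearest-rank percentile (0 < q <= 100)
    s = sorted(data)
    k = max(0, min(len(s) - 1, int(round(q / 100.0 * len(s))) - 1))
    return s[k]

random.seed(1)
# toy daily dust column concentrations, kg m^-2 (lognormal, roughly skewed
# like real aerosol loads)
daily_dust = [random.lognormvariate(0.0, 1.0) for _ in range(365)]

p60 = percentile(daily_dust, 60)
p95 = percentile(daily_dust, 95)
episode_days = sum(1 for x in daily_dust if x > p60)  # "calima" days
extreme_days = sum(1 for x in daily_dust if x > p95)  # extreme events
```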
First of all, the monthly averages of the dust column concentration in the historical period for the three CMIP6 models are compared with the MERRA-2 measurements, which allows us to discard the IPSL model for future simulations, as its behaviour is quite far from the observed one. Then the monthly averages of the future dust column concentration are analysed for the GFDL model in the SSP585 and SSP245 scenarios and for MIROC6 in the SSP126, SSP245, SSP370 and SSP585 scenarios. Since a general increase in the monthly mean dust column concentration is observed, the number of days above the 60th and 95th percentiles is studied to determine whether this increase is due to an increased intensity of the episodes or to a longer duration of these intrusions.
For the historical period, the IPSL model does not reproduce the seasonal behaviour of the observations, and the number of days above the 60th percentile is much higher than that above the 95th percentile; it can therefore be said that the calima episodes were not too intense in the past. For the future, an increase in the number of days with dust intrusions is generally observed for the two selected CMIP6 models, which indicates that the increase in the monthly mean is due to the longer duration of the dust intrusions and, to a lesser extent, to their intensity. In order to obtain more information on this matter, the work concludes by studying the future trends for MIROC and GFDL.
A study of the trends in the annual dust column concentration shows a gradual increase, which can be associated with more dust episodes as well as with an increase in their intensity. Consequently, the trend in the number of annual days of extreme events (above the 95th percentile) was analysed for both models, and an increasing behaviour was observed. To check whether the growth in desert aerosol concentrations could instead be driven by stronger episodes, the average dust concentration of the events in each year was also studied; since this trend turned out not to be statistically significant, episode intensity cannot be considered an important cause of the growth in dust concentrations.
Finally, it can be concluded, in the first place, that the MIROC and GFDL models perform well and are therefore good candidates for simulating the future. Furthermore, a future increase in the frequency and intensity of desert dust intrusions is evident, particularly in the worst-case scenario for CO2 concentrations. Under the initial conditions and assumptions proposed, this work therefore points to a worsening of the calima episodes in the Canary Islands and encourages contributing to the slowing down of climate change.
2021-10-22T09:46:09Z
2021-10-22T09:46:09Z
2021
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/25738
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/157252021-11-05T09:02:32Zcom_915_668com_915_488col_915_678
Modelos semiclásicos en la representación de estados coherentes
García Martín, Imobach
Gómez Llorente, José María
We present a study of semiclassical approximations to quantum magnitudes in the
Bargmann representation. Firstly, a description of the WKB model in the coordinate
representation is given, including a quantitative validity condition for the approximation.
Subsequently, we define the coherent states and their main properties, as well as
the Bargmann states (non-normalized coherent states), which allow us to develop the semiclassical approximations in the new representation. Both the stationary and time-dependent Schrödinger equations are solved, establishing a comparison with the results for the wavefunction obtained in the coordinate-representation formalism. The
energy spectrum is also studied.
As we will prove, the intrinsic complex structure of the Bargmann representation leads us
to an extension to the complex plane, where the concept of analyticity plays a fundamental
role. Finally, we illustrate some advantages of the formalism we have introduced through
its application to a particular stationary system (the harmonic oscillator), achieving an
integral expression for the Hermite polynomials.
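For reference, the coherent states and the non-normalized Bargmann states mentioned above are conventionally defined, in one common convention with $|n\rangle$ the harmonic-oscillator number states, as:

```latex
% Coherent state and Bargmann (non-normalized coherent) state
|z\rangle  = e^{-|z|^2/2} \sum_{n=0}^{\infty} \frac{z^n}{\sqrt{n!}}\, |n\rangle ,
\qquad
\|z\rangle = e^{+|z|^2/2}\, |z\rangle = \sum_{n=0}^{\infty} \frac{z^n}{\sqrt{n!}}\, |n\rangle .
```

A state $|\psi\rangle$ is then represented by the entire function $f(z) = \langle \bar{z} \| \psi \rangle$, which is where the analyticity emphasized in the abstract enters.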
2019-07-26T10:20:05Z
2019-07-26T10:20:05Z
2019
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/15725
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/12662021-11-05T09:02:10Zcom_915_668com_915_488col_915_678
Aplicación de campos eléctricos sobre el crecimiento cristalino del Sulfato de Litio - Potasio y del Sulfato de Litio - Amonio : Influencia sobre sus propiedades polimórficas.
Ramos Hernández, Daniel
Torres Betancort, Manuel Eulalio
Física
Estructura cristalina
2015-10-13T10:10:04Z
2015-10-13T10:10:04Z
2015
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/1266
es
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 internacional)
oai:riull.ull.es:915/134812021-11-05T09:02:31Zcom_915_668com_915_488col_915_678
Estudio mecano-cuántico mediante primeros principios: propiedades estructurales y elásticas de la calcopirita AgGaTe2.
Del Castillo Hernández, Yelko
Muñoz González, Alfonso
Física
2019-03-29T11:00:05Z
2019-03-29T11:00:05Z
2019
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/13481
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 internacional)
oai:riull.ull.es:915/61782021-11-05T09:02:16Zcom_915_668com_915_488col_915_678
Introducción a los ferroeléctricos cerámicos
Dorta Dorta, Víctor Manuel
González Silgo, Cristina
Torres Betancort, Manuel Eulalio
Física
2017-09-26T08:45:29Z
2017-09-26T08:45:29Z
2017
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/6178
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 internacional)
oai:riull.ull.es:915/106072021-11-05T09:02:26Zcom_915_668com_915_488col_915_678
Evolution of metallicity gradients in Milky Way analogues using EAGLE simulations
Bordón Sánchez, Aridai
Brook, Christopher Bryan
Dalla Vecchia, Claudio
Física
2018-10-10T08:40:10Z
2018-10-10T08:40:10Z
2018
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/10607
en
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 internacional)
oai:riull.ull.es:915/200892021-11-05T09:01:54Zcom_915_668com_915_488col_915_678
Fluorescence of Molecules and Polymers
Hernández Álvarez, Christian
Lahoz Zamarro, Fernando
Fluorescence
Molecules
NBD Derivatives
This work focuses on the study of the fluorescence of a set of organic molecules, specifically a group of Nitrobenzoxadiazol (NBD) derivatives, and on the use of one of them to generate a fluorescent drug whose action inside cells can be tracked.
To support this selection, we first briefly present the concepts needed to understand what this work sets out to determine: the phenomenon of fluorescence, the Quantum Yield (QY), fluorophores, and absorption and emission spectra, among others.
Once these concepts have been introduced, a full study of the absorption and emission spectra of the samples is carried out and compared with those of a standard sample in order to obtain their QY, following an experimental methodology described in detail in the work. To determine which of these compounds is the most suitable, the data obtained are analysed; the compound with the highest QY and the smallest margin of error is the appropriate candidate to be attached to the drug.
In conclusion, the study could have been a complete success had Covid-19 not appeared; even so, a large part of it was carried out and could lead to future work developing the sections that could not be completed, along with further proposals such as a study similar to those performed for FLTX1 (see section 3.4) but using the new compound once the fluorescent drug has been synthesised.
2020-06-30T11:47:50Z
2020-06-30T11:47:50Z
2020
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/20089
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/96852021-11-05T09:02:23Zcom_915_668com_915_488col_915_678
Entanglement and decoherence in quantum physics
Avero Rodríguez, Sergio Javier
Brouard Martín, Santiago
Física
2018-07-23T10:30:05Z
2018-07-23T10:30:05Z
2018
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/9685
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 internacional)
oai:riull.ull.es:915/257282021-11-05T09:01:51Zcom_915_668com_915_488col_915_678
Estudio de la variación de la demanda energética en las Islas Canarias con el cambio climático
Orribo Morales, Juan
González Fernández, Albano José
Demanda Energética
Canarias
Cambio climático
In the study presented below, an analysis of the variability of the energy demand in the Canary Islands due to meteorological causes has been carried out. The seasonal and daily behaviour of the energy demand in the period 2013-2019 has been studied. Then, some future estimations were made.
The variability of the energy demand has been studied through three standard indexes, monthly
seasonal variation index (MSVI), daily seasonal variation index (DSVI) and hourly seasonal variation
index (HSVI).
The results confirm the seasonal increase in energy demand in the long summers of all the Canary
Islands. This variability is greater in the non-capital islands, probably due to a greater flow of residents
during the summer period. The hourly and daily variability of the energy demand is also very remarkable in all the islands, as reflected in the different quotas imposed for several time slots by the Spanish Government and the electricity companies. The energy consumed on weekdays remains fairly constant through the working week, decreasing on Saturdays and, even more so, on Sundays. In some islands, such as Fuerteventura or La Gomera, the drop is only really noticeable on Sundays.
The direct relationship between the increase in demand and the increase in temperatures, through the cooling degree-days (CDD), has been proven, yielding quite disparate values with particularities on each island. For this purpose, the comfort temperature of each island has been calculated using a degree-3 polynomial fit of the energy demand data versus the island's average temperature, once the demand time series have been detrended. This establishes a base level of energy consumption beyond which temperature changes imply an increase in demand: from cooling systems, giving rise to the CDD, and from heating systems, giving rise to the so-called heating degree-days (HDD), the latter carrying relatively little weight in the Canary Islands due to the climate. The time series of the mean temperature of each island were calculated from ERA5 reanalysis data by averaging all grid nodes that correspond to land below 1000 masl. This reduces the possible bias produced by the low temperatures at higher elevations, which are not relevant to the relationship with electricity consumption, since consumption does not occur in those areas.
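The comfort-temperature and degree-day procedure described above can be sketched as follows; this is a hypothetical illustration with synthetic data (the quadratic demand curve and all variable names are assumptions, not the thesis code):

```python
import numpy as np

# Hypothetical sketch: fit a cubic polynomial to daily demand vs. island-mean
# temperature, take the temperature minimising the fit as the comfort
# temperature, and accumulate degree-days on either side of it.
rng = np.random.default_rng(1)
temp = rng.uniform(14.0, 30.0, size=2000)  # synthetic daily mean temperature (deg C)
demand = 5.0 + 0.03 * (temp - 21.0) ** 2 + rng.normal(0.0, 0.2, temp.size)  # synthetic demand

coeffs = np.polyfit(temp, demand, deg=3)               # degree-3 fit, as in the study
grid = np.linspace(temp.min(), temp.max(), 1000)
t_comfort = grid[np.argmin(np.polyval(coeffs, grid))]  # base level of consumption

cdd = np.maximum(temp - t_comfort, 0.0).sum()  # cooling degree-days
hdd = np.maximum(t_comfort - temp, 0.0).sum()  # heating degree-days
print(round(t_comfort, 1))
```

The comfort temperature is taken as the minimum of the fitted curve; CDD accumulate the temperature excess above it and HDD the deficit below it.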
After the analysis of the historical 2013-2019 period, the study has been extended to the analysis of
possible future scenarios of the evolution of electricity demand and its variability due to climate
change.
Regionalized climate projection data were provided by the Grupo de Observación y la Atmósfera (GOTA), which belongs to the Universidad de La Laguna (ULL). These projections were made using the WRF mesoscale model with boundary conditions provided by three global climate models (GFDL-ESM2M, IPSL-CM5A-MR and MIROC-ESM) to simulate the two future periods under study: 2030-59 and 2070-99. In addition, two possible socio-economic scenarios of greenhouse gas emissions were taken into account: RCP4.5 (Representative Concentration Pathway), a more hopeful scenario, and RCP8.5, a more catastrophic one. They correspond to additional radiative forcings of 4.5 and 8.5 W/m2 by 2100, respectively. Data from the WRF simulations, which have a much higher resolution, were aggregated onto a grid equivalent to that of ERA5, and the same process was applied to calculate the mean temperature time series for each island. Furthermore, a bias correction was applied to these time series using the scaled distribution mapping (SDM) technique, which outperforms previous methods based on quantile mapping and preserves the raw climate-model projected changes to meteorological variables such as temperature and precipitation.
An increase in CDD is predicted in both cases: in the RCP4.5 scenario in a more moderate and manageable way, with a stabilization of the values, whereas in the RCP8.5 scenario the increase in CDD is exponential and shows no stabilization, reaching almost 5 additional CDD in the summers of the 2070-2099 period. In addition, in the first three months of the year, when there are currently very few days with non-zero CDD values, the energy demand for cooling could become significant in the future: at the end of the century, in the least favourable scenario, the corresponding CDD could take values between 2 and 5 during those months.
For future work, it would be interesting to have more disaggregated data on energy demand and to study the relationship between energy demand and temperature in different socioeconomic environments: rural, residential, industrial or commercial areas, etc. In addition, using projections of technological and socioeconomic evolution, which would allow us to estimate future trends in the use of cooling systems and their efficiency, would make it possible to translate the project's results, currently based on CDD, into estimates of future energy demand. This approach would be far more appropriate than simply assuming that energy uses and technologies remain unchanged.
The study presented here analyses the variability of the energy demand in the Canary Islands due to meteorological causes, examining the seasonal and daily behaviour of demand over the period 2013-2019.
This variability has been studied through three standard indices: the monthly seasonal variation index (MSVI), the daily seasonal variation index (DSVI) and the hourly seasonal variation index (HSVI), confirming the seasonal increase in energy demand during the long summers of all the Canary Islands.
Assuming a direct relationship between the increase in demand and the increase in temperatures, through the cooling degree-days (CDD), the comfort temperatures of the different islands have been calculated, thereby establishing a base level of energy consumption of the cooling systems beyond which temperature changes imply an increase in energy demand.
After the analysis of the 2013-2019 historical period, the study was extended to possible future scenarios of the evolution of electricity demand and its variability under climate change.
Different boundary conditions were assumed to simulate the daily mean temperature for the two future periods under study, 2030-2059 and 2070-2099, taken from three international global climate models: GFDL-ESM2M, IPSL-CM5A-MR and MIROC-ESM. In addition, data based on two possible socioeconomic greenhouse gas emission scenarios were used: RCP4.5, a more hopeful scenario, and RCP8.5, a more catastrophic one.
An increase in CDD is estimated in both cases: in the RCP4.5 scenario in a more moderate and manageable way, with a stabilization of the values, whereas in the RCP8.5 scenario the increase in CDD is more abrupt and shows no stabilization, reaching almost 5 additional CDD in the summers of the 2070-2099 period.
2021-10-22T09:20:16Z
2021-10-22T09:20:16Z
2021
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/25728
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/96102021-11-05T09:02:25Zcom_915_668com_915_488col_915_678
Nanoparticles doped with Yb3+ and Tm3+ ions used as an optical upconversion temperature sensor
Llanos Expósito, Marcos
Ríos Rodríguez, Susana
Física
2018-07-19T12:35:10Z
2018-07-19T12:35:10Z
2018
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/9610
en
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 internacional)
oai:riull.ull.es:915/157312021-11-05T09:02:30Zcom_915_668com_915_488col_915_678
Emisión láser en películas delgadas
Bueno Felipe, Sonia María
Lahoz Zamarro, Fernando
Luminiscencia
Emisión
ASE
In this work we make an approach to some spectroscopy-related concepts such as luminescence, emission spectra, the characteristics of absorption and emission, laser emission and amplified spontaneous emission. We specifically address the concept of fluorescence, as it is the phenomenon on which the current line of investigation of this work is based, starting with a theoretical background on some basic aspects and then proceeding to a more detailed study of our state-of-the-art samples, commenting on the importance of each sample separately and focusing on the optical properties that make them some of the most important components in the field. These samples are thin-film composites with different active layers, including the organic polymer poly[2-methoxy-5-(2'-ethylhexyloxy)-p-phenylene vinylene] (MEH-PPV), the organic semiconductor 5,6,11,12-tetraphenyltetracene (rubrene) and organometallic TiO2 perovskites. These materials have an extensive literature and are among the most popular topics due to their high charge-carrier mobility, the astounding growth rate of their efficiencies and their advantageous production costs, among other features.
We set two main objectives: the optical characterization of the different materials, and the analysis of various types of setups in an attempt to obtain different types of emission, such as laser emission or amplified spontaneous emission.
We show the specifications of the rubrene, perovskite and MEH-PPV samples to cover the characteristics of the thin-film composites. We then briefly describe the devices used for the optical characterization, mentioning some of their specifications, and show the setups used for the absorption, emission, lifetime and other measurements. A theoretical background is given for a better comprehension of the results derived from the optical characterization.
The experimental results are displayed and discussed separately for each material, studying thoroughly the processes that take place and highlighting the peculiarities of each sample. For the rubrene samples, the absorption spectrum was analysed for different samples and their lifetimes were studied according to the energy-level diagram of rubrene; no ASE emission was observed. For the perovskite-based samples we study the lifetimes of compact and porous perovskites, together with an analysis of their emission spectra. For MEH-PPV we start with the absorption and emission spectra of the different composites; no laser phenomenon was observed, but narrow emission peaks appeared, with a wavelength that varied with the detection angle, as well as ASE, whose threshold power was studied for the different composites. All the results were compared with those found in the literature, and we delve into the possible causes for those that do not match our expectations. Finally we sum up the measurements we made and present the conclusions reached in this project.
2019-07-26T10:40:24Z
2019-07-26T10:40:24Z
2019
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/15731
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/96872021-11-05T09:02:22Zcom_915_668com_915_488col_915_678
Multidecadal temperature and salinity changes at 24.5ºN
Hatamoto Lázaro, Carla
Guerra García, Juan Carlos
Física
2018-07-23T10:40:04Z
2018-07-23T10:40:04Z
2018
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/9687
en
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 internacional)
oai:riull.ull.es:915/30602021-11-05T09:02:11Zcom_915_668com_915_488col_915_678
Usage of astronomical geodesy for millimetric ground deformation detection
González Álvarez, Itahisa
Arévalo Morales, María Jesús
Eff-Darwich Peña, Antonio Manuel
Geodesia
2016-09-20T07:18:54Z
2016-09-20T07:18:54Z
2014
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/3060
en
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 internacional)
oai:riull.ull.es:915/215482021-11-26T13:17:47Zcom_915_668com_915_488col_915_678
Estudio de la influencia de la resolución espacial en la simulación de fenómenos meteorológicos extremos en Tenerife
Suárez Bonilla, Àngel David
González Fernández, Albano José
Pérez Darias, Juan Carlos
Meteorología
Simulación
Modelos numéricos
The Canary Islands are characterized by a mild and dry climate throughout the year; however, rainfall is more abundant than expected due to the relief of the islands. Furthermore, under specific meteorological circumstances, the
orography of the islands produces an amplifying effect, causing severe local
precipitation events. The importance of predicting these events is key for aeronautical
navigation, agriculture, sports and, especially, the prevention of natural disasters.
Specifically, this work will study the heavy rains that took place in Santa Cruz de
Tenerife in 2002 and caused floods throughout the capital, losses worth millions of
euros and the death of eight people. Furthermore, the AEMET was not able to predict
the magnitude of these precipitations, so the population was not alerted.
Subsequently, AEMET carried out a study in which they tried to reproduce these precipitations by analyzing in depth the wind and pressure fields and
the convective structures. In addition, they carried out some simulations modifying the
resolution to try to improve the results. However, due to the complexity of the
phenomenon and the scarcity of data, they did not succeed.
In this work, the same day will be analyzed with the WRF model to try to
replicate these atmospheric conditions. For this, several simulations will be carried out
using different parameters to determine which set of them provides results closest to
the event that occurred on the island of Tenerife.
2020-10-06T10:46:16Z
2020-10-06T10:46:16Z
2020
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/21548
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/291032023-02-07T13:25:51Zcom_915_668com_915_488col_915_678
Initiation to the theoretical study of materials: DFT for C, Si, Ge and Sn
Monforte Marín, Álvaro
Radescu Cioranescu, Silvana Elena
Mújica Fernaud, Andrés
Grado En Física
The study of condensed matter is one of the main fields of modern physics. Until a few years ago, it was divided into two streams: "hard" condensed matter physics, which studies the quantum properties of matter, and "soft" condensed matter physics, which studies those properties of matter for which quantum mechanics plays no role. Central to this field is understanding how electrons and nuclei interact according to the well-established laws of electromagnetism and quantum mechanics, and trying to explain the resulting properties.
The computational cost of these calculations is enormous. Nowadays it is common for this field to operate collaboratively between different groups of researchers, in order to gain access not only to more human resources but also to hardware or software that reduces this computational cost in one way or another. Ab initio theories and calculations seek physical-mathematical routes that shorten these computations analytically, also drawing on empirical support, to offer the best possible approximations.
The present work focuses on an introductory background to these ab initio calculations, which supports our understanding of how they are applied to study different materials in both diamond structures, cubic and hexagonal (called lonsdaleite), for C (carbon), Si (silicon), Ge (germanium) and Sn (tin). The program used is VASP (Vienna Ab initio Simulation Package). Convergence studies, equations of state with the Birch-Murnaghan approximation, densities of states, band structures and phonon frequencies are studied.
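For context, the third-order Birch-Murnaghan equation of state referred to above expresses the pressure in terms of the equilibrium volume $V_0$, the bulk modulus $B_0$ and its pressure derivative $B_0'$ (the standard textbook form, quoted here as background rather than from the thesis):

```latex
P(V) = \frac{3B_0}{2}\left[\left(\frac{V_0}{V}\right)^{7/3} - \left(\frac{V_0}{V}\right)^{5/3}\right]
\left\{ 1 + \frac{3}{4}\left(B_0' - 4\right)\left[\left(\frac{V_0}{V}\right)^{2/3} - 1\right] \right\}
```

Fitting energy-volume data from total-energy calculations such as VASP's to this form yields $V_0$, $B_0$ and $B_0'$ for each structure.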
The results obtained are consistent with currently published calculations and theories, thus confirming their reproducibility and consistency. They are compared with publications extracted from different sources (most often arXiv).
The diamond structure appears in different materials. It has beautiful optical properties and a very high thermal conductivity (carbon). The hexagonal form of diamond, first observed in meteorite craters [1], can now be produced, e.g. in shock-compression experiments [2], and is significantly stiffer and stronger than regular gem diamonds. Understanding the differences between them could provide a key to the next step in discovering new materials.
2022-07-19T10:31:16Z
2022-07-19T10:31:16Z
2022
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/29103
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/223512021-11-05T09:01:59Zcom_915_668com_915_488col_915_678
Reacción de estado sólido en compuestos polimorfos tipo RE2(MoO4)3 monitorizados por termodifractrometría en un sincrotrón
Ramirez Rodriguez, Nivaria Rut
Torres Betancort, Manuel Eulalio
González Silgo, María Cristina
This work focuses on the study of the solid-state synthesis and phase transitions of rare-earth molybdates with formula RE2(MoO4)3 by X-ray thermodiffraction on powder samples. This family of compounds is interesting because it comprises up to 10 different polymorphs; their structural variety gives them important physical properties with interesting applications.
Stoichiometric mixtures of the oxides MoO3 and RE2O3, where RE = Nd, Sm, Eu and Gd, were
used as initial samples. Since this work is not completely experimental, we decided to introduce this
family of compounds, in detail, by plotting and describing the different crystal structures of the two
main polytypes: modulated scheelites, and ferroic phases. For this purpose, we made a bibliographic
tour from the fundamental to modern crystallography, defining concepts such as polytypes, crystalline
symmetry, space and superspace groups, modulated structures, among others.
The experiment was performed at the ESRF synchrotron (Grenoble, France); specifically, with the
Spanish beamline BM25-A, six years ago. The collected data have never been fully analysed, so it has
been necessary to retrieve and review the experimental conditions and explain them in detail. While reviewing the experimental data, we decided to explain how a synchrotron works and its
advantages over a conventional X-ray tube. In addition, we also review some basic concepts of
diffraction for describing the diffraction by crystalline powder. Regarding the experimental conditions,
we distinguish between the first heating, in which the compounds with stoichiometry RE2Mo3O12 were
formed (from room temperature up to 900ºC), and the other cooling and heating cycles, to study the
phase transitions. The schedule followed for the Nd and Sm samples was similar and it consisted of
more cycles than for the Eu and Gd samples, as time in these experiments is limited. The heating and
cooling cycles were carefully plotted for each sample.
There were more than 100 diffractograms, so my role was to help identify and refine some of these
diffractograms, in particular the pure phases and the last cycles of the refinements. Before that, I had
to explain and distinguish between phase identification and Le Bail refinement. To achieve the
phase identification, we had to plot most of the diffractograms and compare them with the simulated
diffractogram for each phase, whose crystal structure was obtained with the help of the ICSD database.
In addition, we obtained more quantitative results, such as the lattice parameters, with Le Bail
refinements of some selected phases identified at different temperatures. The most difficult work was
the identification and refinement of, we believe, all the non-stoichiometric crystalline phases (including
starting oxides) before the formation of the α-phase. Afterwards, it was easier to observe the α ⇔ β
transition at high temperature.
As we progressed in this work, we completed a phase diagram within this family of rare-earth
molybdates, studying the sequence and reversibility of the phase transitions. To do so, we have taken
into account the temperatures of each cycle and the ionic radii of the rare earths. Some of the phases
and transitions found had not been studied before, for example, the non-reversible transitions from the
β’ phase, obtained at room temperature by quenching (very fast cooling to freeze the crystalline
structure at ambient conditions), to the α-phase or the La2(MoO4)3 phase, normally obtained by cooling. Along the way we have found a possible phase mixture or an incommensurate phase for Nd2(MoO4)3 during the heating cycle, also from the β' phase. In contrast, we have not studied the better-known β ⇔ β' (ferroelectric-paraelectric) phase transition.
From the conclusions obtained we can carry out further refinements and evaluate the thermal
dependence of the lattice parameters, as well as publish a scientific paper based on this work.
The work has been divided into three chapters:
The first chapter was entitled: Introduction to molybdates with RE2(MoO4)3 stoichiometry, crystal
structures, polymorphisms and phase transitions. It reviews the state of the art, motivations, aims and
objectives of the work and explains the organisation of the work. The second section was devoted to
basic explanations of symmetry and direct lattice, crystal systems and crystal classes, space groups and
the reciprocal lattice. In the third section we described the different crystals with formula
RE2(MoO4)3, divided into modulated scheelites (including the La2(MoO4)3- and the α-phase) and
the ferroelectric-ferroelastic phase and paraelectric-paraelastic phase (i.e. the ferroic phases β and β’).
Finally, we described other possible molybdates with different RE/Mo ratios.
The second chapter, entitled Diffraction techniques and experimental conditions, focuses on synchrotron radiation: storage rings and synchrotron radiation sources, the properties of synchrotron radiation and, in particular, the BM25 X-ray beamline of the Spanish CRG. The following sections were dedicated to X-ray diffraction and polycrystalline samples. From a schematic
diagram of a powder diffractometer, we give the experimental conditions of thermodiffraction
including heating-cooling schedules.
The third chapter is devoted to the analysis of the results and discussion. First, we explain how the
different phases can be identified from the experimental diffractograms. For this purpose, we modelled
the complete profile with the structural data obtained from the ICSD database and compared them with
experimental ones. Second, we explain the Le Bail least-squares refinement and the particular
strategies. We presented and discussed the results of the first heating cycle (RE2(MoO4)3-phase
formation) and the subsequent cooling and heating cycles (thermal evolution and phase transitions).
We end this chapter with conclusions and possible future work.
Due to the very large and varied literature reviewed (more than fifty articles) and in order not to lose
the work done, we added an important part in the supplementary material. Here we include the most
complex descriptions, mathematical developments and some very specific definitions.
2021-02-25T10:00:44Z
2021-02-25T10:00:44Z
2021
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/22351
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/290992022-11-17T11:50:26Zcom_915_668com_915_488col_915_678
3d-laser nanolithography in yag: Measurement of refractive index changes and design of a photonic crystal waveguide
Esquivel González, Marcos
Ródenas Seguí, Airán
Grado En Física
This work presents both an experimental and a computational study of photonic structures. On the experimental side, we study the enhancement of wet chemical etching produced in yttrium aluminium garnet (YAG) optical crystal for lines written with sub-micrometre width using the 3D laser writing (3DLW) technique. This technique uses a femtosecond-pulse laser (∼150 fs) with wavelengths in the NIR range (∼800 nm). The size of the empty pores obtained by wet etching in lithographed zones, with sub-micrometre widths and lengths below one millimetre, is ideal for creating photonic structures. In this context, we have analysed and identified the laser configurations that achieve the highest etching rate in YAG as a function of the scanning speed, the repetition rate of the laser pulses and the pulse energy. Likewise, the possibility of creating air pores of different diameters, or with circular shape, has been controlled by changing the laser pulse energy. Furthermore, thanks to a wavefront measurement technique (WFPI) developed at Wooptix S.L., it has been possible to characterize for the first time the order of magnitude of the refractive-index change (10−2) produced at the sub-micrometre scale in YAG by the 3DLW process. In addition, a numerical analysis of a 2D hexagonal photonic lattice of nanopores in YAG has been carried out using commercial software (BandSOLVE, RSoft). In this way, we identify the configuration with an optimal photonic bandgap (PBG), found for light propagating with its electric field in the plane. Finally, we have investigated the possible confined modes of a microstructured optical waveguide (MOW), simply designed with a hexagonal cladding structure and a single pore defect at the centre as the core.
In the present work, a study of photonic structures is carried out simultaneously
from an experimental and computational point of view. Experimentally, the yttrium
aluminium garnet (YAG) optical crystal has been used to study the wet-chemical
etching enhancement in sub-micron width line tracks written with the 3D laser writing
(3DLW) technique. This technique uses a femtosecond pulse laser (∼150 fs) in the NIR
spectral range (∼800 nm). The size of the hollow pores obtained by wet etching
in lithographed areas, with widths of sub-micron size and lengths in the sub-mm range,
is ideal for the creation of photonic structures. In this context, it has been possible to
analyse and identify the laser configurations that achieve a higher etching rate in this
crystal depending on scan speed, laser pulse repetition rate, and pulse energy. Also,
the possibility of creating air pores of different diameters or with a circular shape can
be controlled by changing the pulse energy of the laser. On the other hand, by means
of a wavefront phase imaging (WFPI) technique developed at Wooptix S.L., the order
of the refractive index change (10−2) produced at sub-micron scale in YAG due to the
3DLW process has been characterized for the first time. Complementarily, a numerical
study of a 2D nanopore hexagonal photonic lattice in YAG has been performed by
means of commercial software (BandSOLVE, RSoft). This makes it possible to identify the
configuration with an optimal photonic bandgap, found for light propagating
with its electric field lying within the plane. Finally, possible confined modes for a
microstructured optical waveguide (MOW), simply designed with a cladding hexagonal
structure and a single pore defect in the centre as core, have also been investigated.
2022-07-19T10:30:45Z
2022-07-19T10:30:45Z
2022
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/29099
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/15732 2021-11-05T09:02:30Z
Estado de referencia inicial de un acelerador lineal de electrones de uso clínico.
Initial reference status of a clinical electron linear accelerator (LINAC)
Balani Mahtani, Vivek Vinod
Garrido Bretón, Carlos
Torres Betancort, Manuel Eulalio
In recent years, the development of radiophysics and medical physics in Spain has been
supported by large investments, allowing public hospitals to acquire new treatment equipment.
An electron linear accelerator (LINAC) is a system that destroys cancer cells with
high-energy photon (or electron) beams. For the new TrueBeam accelerator, these energies
are 6 MV (with and without flattening filter), 10 MV (without flattening filter) and
18 MV (with flattening filter).
The parameters measured to establish the accelerator's initial reference status fall into
two groups: those related to beam quality, taken as a synonym for beam energy (percentage
depth dose and tissue-phantom ratio), and those related to the dosimetry of the radiation
field (homogeneity for profiles flattened with a filter, slope for unflattened profiles, and
symmetry for both types of profile).
The results obtained for the 6 MV beam without filter agree closely with those reported
in the literature, which were used for comparison because unflattened beams are a novelty
at the Hospital Universitario. The results for the beams with flattening filter are consistent
with the values obtained on the hospital's other accelerators.
Once the measurements have been made, the values are stored in the accelerator's
monitoring system so that, during the monthly quality controls, the necessary checks can be
performed to ensure that the accelerator works correctly, without significant variations of
these parameters, which are characteristic of each beam energy.
2019-07-26T10:40:29Z
2019-07-26T10:40:29Z
2019
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/15732
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/14645 2021-11-05T09:02:31Z
Analysis of the influence of synoptic conditions on precipitation in the Canary Islands.
Esmorís Parga, Rocío
González Fernández, Albano José
Expósito González, Francisco Javier
Canary Islands
Rain
rainfall
precipitation
synoptic conditions
The present work analyzes the atmospheric synoptic conditions which mainly affect
rain episodes over the Canary Islands. The main aims are to assess the reliability of
two databases used to determine the weather in the Canary Islands and to study the
phenomenological distribution of rain episodes.
To achieve these aims, it is especially important to take into account three specific features
of the Canary Islands. First, their particular location: close to the African continent
in a transition area from mild to tropical temperatures affected by the North Atlantic
Oscillation (NAO) and the Azores High. Second, the common weather conditions: the
archipelago is considered a dry and very stable area, with around 50 rain episodes
per year on average. Third, its steep orography: altitude varies by more than 3000 m over less
than 20 km horizontally.
Once these features are established, a phenomenological classification is given. Four
types of atmospheric disturbance are considered in order to characterize the weather of the
Canary Islands: Deep Atlantic Lows (DAL), Atlantic Surface Lows (ASL), Upper Atlantic
Lows (UAL) and Troughs (TRO). Episodes that cannot be assigned to any of these
categories are placed in a no-detection class (ND, None).
Using online resources, such as the AEMET database ARCIMÍS and Meteo
Centre Reanalysis, a set of 104 heavy-rain cases (episodes > 30 mm) is analyzed to
better understand the particular atmospheric situations. Furthermore, this type
of analysis provides a reliable reference against which the later automatic classification of the
phenomena can be compared.
After that, the AEMET database is analyzed. This database shows the
distribution of heavy rain (> 30 mm) and of all rain (> 1 mm) in the Canary Islands.
These data are then used to assess the reliability of the numerical databases.
Then, the SPREAD and WRF databases are analyzed. Maps of the distribution of the above
classification are shown for these two databases: first 10 mm and 1 mm maps, then seasonal
maps. In this way, both databases are easily compared and, furthermore, it is possible
to establish the main phenomena affecting the Canary Islands and their particular
locations.
Finally, as conclusions, the correspondence between these databases is presented, as well
as the most important phenomena over the Canary Islands. The correspondence between
the databases is particularly trustworthy, and the most important phenomenon affecting the
Canary Islands is the DAL, which is prominent during the winter.
En la presente memoria se pretende analizar las perturbaciones atmosféricas que dan
lugar a las precipitaciones más importantes en las Islas Canarias. Los objetivos principales
del trabajo son establecer la fiabilidad de las bases de datos para determinar los fenómenos
de precipitaciones, así como estudiar la distribución de los episodios de lluvia.
Para lograr estos objetivos es particularmente importante tener en cuenta tres características de las Islas Canarias. Primero, su localización peculiar: cercanas al continente
africano, en una zona de transición de temperaturas suaves a tropicales, afectadas por la
Oscilación del Atlántico Norte (NAO) y por el anticiclón de las Azores. En segundo lugar, las condiciones climáticas generales: el Archipiélago Canario está considerado como
un área seca y estable, con una media de 50 episodios de lluvia al año. En tercer lugar, su abrupta orografía: se alcanzan alturas de más de 3000 m en menos de 20 km
horizontalmente.
Una vez se han establecido las características anteriores, se proporciona una clasificación fenomenológica. Dicha clasificación contiene 4 casos de perturbaciones atmosféricas:
bajas atlánticas profundas (DAL), bajas atlánticas en superficie (ASL), bajas atlánticas
en altura (UAL) y vaguadas (TRO). Con estos fenómenos se pretende caracterizar esta
situación especial de precipitaciones en las Islas Canarias. Los episodios que no ha sido
posible incluir en ninguno de los anteriores fenómenos se han incluido en la categoría de
ninguna detección (ND, None).
Usando recursos en línea tales como la base de datos ARCIMÍS de AEMET y Meteo
Centre Reanalysis, se analiza un conjunto de 104 casos de lluvias extremas (episodios de
más de 30 mm en algún punto), con el fin de entender completamente las situaciones
particulares de la atmósfera para esos fenómenos. Además, este tipo de análisis proporciona
un método fiable para comparar la clasificación automática de los fenómenos.
Después, se analiza la base de datos de AEMET. Usando esta base se estudian las
distribuciones de lluvia extrema (> 30 mm) y de la lluvia total (> 1 mm). Más tarde estos
datos se usan para comparar la fiabilidad de las otras dos bases de datos.
Luego, se analizan las bases de datos SPREAD y WRF. Se muestran mapas de estas dos
bases de datos donde se indica la distribución de los fenómenos clasificados. Primero se
analizan mapas de 10 y 1 mm y después mapas por estaciones. De esta forma, se pueden
comparar de forma clara ambas bases de datos y, además, es posible establecer cuáles son
los fenómenos que afectan principalmente a las Islas Canarias y dónde están localizados.
Finalmente, a modo de conclusiones, se establece que: primero, la correspondencia
entre las bases de datos es fidedigna; segundo, el fenómeno más importante durante los
episodios de lluvia es la DAL y la estación que deja más lluvias es el invierno.
2019-06-26T11:35:28Z
2019-06-26T11:35:28Z
2019
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/14645
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/3044 2021-11-05T09:02:11Z
Study of the thermodynamic properties of solids through ab-initio methods: Estudio de las propiedades termodinámicas de sólidos mediante métodos ab-initio
Coello Rodríguez, Eduardo
Muñoz González, Alfonso
Rodríguez Hernández, Plácida
2016-09-02T09:30:05Z
2016-09-02T09:30:05Z
2016
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/3044
es
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 internacional)
oai:riull.ull.es:915/25024 2021-11-26T08:50:24Z
Fractal analysis of large-scale structures
Marrero De La Rosa, Carlos
Brook, Christopher Bryan
González Fernández, Albano José
Fractal
Large-Scale Structures
Cosmology
Over time, humankind has defined structures and shapes that help it to better understand
its surroundings, bringing them somehow closer to its comprehension.
It was during the nineteenth and twentieth centuries that a new shape appeared: what came to be called a fractal,
a mathematical object whose apparent irregularity repeats at different scales. An object that
does not follow Euclidean geometry. An object that, despite these curious characteristics, can
be glimpsed in coastlines, in fern leaves or in the quantum foam. Hausdorff
proposed one of the first definitions of dimension applicable to a fractal, opening
the door to the calculation of the fractal dimension, which is the cornerstone of this work. It can
be understood in many ways, but the one best suited to this work is that the
fractal dimension gives an idea of how irregular a distribution is: of how the
points that make up a structure are distributed. It can therefore provide information
about the clustering of a distribution.
In this work, the fractal dimension of the large-scale structures of the universe is measured in
order to check whether they follow a homogeneous distribution. To this end, data provided
by the BOSS (Baryon Oscillation Spectroscopic Survey) galaxy-group dataset,
part of the SDSS (Sloan Digital Sky Survey), are used. Specifically, we work with the combined
data of the two BOSS target-selection algorithms for the northern galactic cap: LOWZ,
which selects objects up to a redshift z ≈ 0.4, and CMASS, which selects objects in
the range 0.4 < z < 0.7. This combined sample is called CMASSLOWZTOT North and
provides data for about 953255 objects.
The main objective is to study how the fractal dimension of these large-scale
structures varies with comoving distance, and to analyse whether the results agree with those
reported in the literature. To achieve this objective, the fractal dimension is measured with several
methods: box-counting algorithms, the two-point correlation function and the Hankel
transform of the power spectrum.
First, in order to run the box-counting programs, a map of the distribution of the objects
on the sky is needed. For this, the sample provided by SDSS is used and, with the Python
programming language, this distribution map is drawn. The first box-counting methods
divide the map into small two-dimensional boxes, of which only those containing
at least one object are taken into account. In one method, the boxes do not
overlap but are adjacent to one another (standard method), while in the other the
boxes overlap (gliding method). The third box-counting method takes a third
component into account, since it divides the dataset into cubes. The third component is
obtained by rendering the distribution map in grayscale, with the gray level corresponding
to the comoving distance. In this way, the fractal dimension is measured with
three box-counting methods.
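The standard box-counting procedure described above can be sketched as follows. This is a minimal illustrative version, not code from the thesis: the function name and the uniform test cloud are our own assumptions.

```python
import numpy as np

def box_counting_dimension(points, sizes):
    """Standard box-counting estimate of the fractal dimension of a 2D point set.

    For each box size s, the unit square is divided into non-overlapping
    boxes; only boxes containing at least one point are counted, and the
    dimension is the slope of log N(s) versus log(1/s).
    """
    points = np.asarray(points, dtype=float)
    # normalise the cloud into the unit square
    points = (points - points.min(axis=0)) / np.ptp(points, axis=0)
    counts = []
    for s in sizes:
        n_boxes = int(round(1.0 / s))
        idx = np.floor(points / s).astype(int)
        # points sitting exactly on the upper edge belong to the last box
        idx = np.clip(idx, 0, n_boxes - 1)
        counts.append(len({tuple(i) for i in idx}))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# a uniformly filled square is space-filling, so its dimension should be close to 2
rng = np.random.default_rng(0)
cloud = rng.random((20000, 2))
dim = box_counting_dimension(cloud, sizes=[1/4, 1/8, 1/16, 1/32])
```

Counting occupied boxes at several scales on the real sky map and fitting the log-log slope is what the standard method above does; the gliding and grayscale variants change only how the boxes are laid out.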
Continuing with the box-counting algorithms, the standard and grayscale methods are also
applied to a sky map built with HEALPix, which reproduces the sky on a spherical surface
divided into pixels of equal area, thus allowing a more realistic representation that follows
the sky's geometry.
The next step uses the two-point correlation function to calculate the fractal dimension.
From it we compute the structure function, g(r) = 1 + ξ(r), its log-log gradient (the gradient
function), γ(r) = d log g(r)/d log r, and the fractal-dimension function, D(r) = 3 + γ(r). In this
case, the two-point correlation function is measured directly through pair counting, using the
Landy & Szalay estimator. After that, the two-point correlation function is obtained again via
the Hankel transform of the power spectrum, and the same procedure is followed, i.e.
computing the structure function, its log-log gradient, and so on.
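As a hedged illustration of the dimension function just defined, the following sketch evaluates D(r) = 3 + γ(r) for a toy power-law correlation function ξ(r) = (r₀/r)^1.8; the slope 1.8 and r₀ = 5 are illustrative choices of ours, not values from this work.

```python
import numpy as np

def fractal_dimension_function(r, xi):
    """D(r) = 3 + d log g / d log r, with the structure function g(r) = 1 + xi(r)."""
    g = 1.0 + xi
    gamma = np.gradient(np.log(g), np.log(r))  # log-log gradient of g
    return 3.0 + gamma

r = np.logspace(0.0, 3.0, 400)   # separations on a log grid (toy units of Mpc/h)
xi = (5.0 / r) ** 1.8            # toy power-law two-point correlation function
D = fractal_dimension_function(r, xi)
```

In this toy model, on small scales where ξ ≫ 1 the dimension tends to 3 − 1.8 = 1.2, while on large scales ξ → 0 and D → 3, the homogeneous limit.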
Once all the measurements have been made for each method, the results shown in Table 0 are obtained.

Table 0: Mean fractal dimension over the interval 300 to 2400 Mpc h⁻¹ for each method. The error is estimated as the standard deviation of the measurements.

  SBC (standard box-counting):             1.01 ± 0.08
  GBC (gliding box-counting):              1.12 ± 0.08
  GSBC (grayscale box-counting):           2.42 ± 0.11
  HSBC (standard box-counting, HEALPix):   1.78 ± 0.04
  HGSBC (grayscale box-counting, HEALPix): 1.40 ± 0.11
  CF (correlation function):               2.25 ± 0.03
  PS (power spectrum):                     2.22 ± 0.05
For all methods, a homogeneous behaviour of the fractal dimension is found, although a
single value cannot be asserted, since it differs from method to method. Moreover, the
literature reports D ≈ 3 at these scales, so the closest method would be the grayscale one,
although it still falls short of that figure.
It is concluded that the homogeneity of the large-scale structures over the analysed
intervals is confirmed, although not with the fractal-dimension value given in the literature.
In turn, a more detailed study is proposed in order to locate the range over which the
universe passes from inhomogeneous to homogeneous, as well as to delve deeper into the
relations between fractal geometry and cosmology, following the steps of several studies.
It is also proposed to enlarge the scale over which the data are analysed, in order to try
to obtain a result more consistent with that shown in the literature.
2021-07-29T11:45:19Z
2021-07-29T11:45:19Z
2021
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/25024
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/20054 2021-11-05T09:01:54Z
Estudio estadístico de la dirección de rotación en galaxias espirales
Pérez Martín, Adrián
Sánchez Menguiano, Laura
Ruiz Lara, Tomás
Rotation
Galaxies
Statistics
Humanity has always taken a certain interest in everything that surrounds it, including the sky. This
curiosity drove a shift from myth to reasoning as a tool to understand the world. In the
17th century, interest in the skies grew with the invention of the first telescopes and the
new observations these instruments allowed, opening a window onto a new paradigm. The constant
development of tools to observe the sky led, among many other things, to the discovery of galaxies by
Hubble in the 20th century, along with various studies of these objects over that century.
Hubble himself classified this new kind of astronomical object according to its morphology.
Among the most relevant features, the spiral structure displayed by some of them stands out. These
so-called spiral galaxies also present ordered rotation. Both theoretical and observational studies seem
to agree that, in most galaxies, the spiral arms should point opposite to the
sense of rotation. Since no statistical study of this topic has been made since the 1980s, and new
databases such as CALIFA and DESI are now available, a new study of the orientation of spiral arms seems relevant.
2020-06-30T10:31:18Z
2020-06-30T10:31:18Z
2020
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/20054
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/22350 2021-11-05T09:01:59Z
Evidencias de descomposición en multiferróicos RE2(MoO4)3 antes de PIA
González Correa, Eva
González Silgo, María Cristina
Torres Betancort, Manuel Eulalio
In this work we present a study of the rare-earth molybdates RE2(MoO4)3 with RE = Eu, Tb
and Ho. Under ambient conditions, the Eu- and Tb-containing compounds can be found in
the α and β′ phases, while the holmium-containing compound can be found in the γ and β′
phases. In the study of the compression of β′-Tb2(MoO4)3 by the CCDD group, the
hypothesis of a transition to a new phase, called the δ phase, was considered. Subsequently,
in the study of Y2(MoO4)3 synthesised under unconventional conditions, non-stoichiometric
oxide and molybdate phases with diffractograms very similar to those of the δ phase were obtained.
This opened the door to another possible hypothesis about the β′ → δ transition:
that it is a decomposition induced by high pressure. To verify it, the named compounds were
synthesised with a modified solid-state synthesis, applying a pressure of 0.66 GPa to compact
the powder samples and raising or lowering the synthesis temperature, which was different for each case.
We carry out a study of the crystal structures of the most relevant phases
involved: scheelite-type α, β-β′ and γ. Additionally, symmetry relationships provide clarity
in understanding the phase transitions.
A routine diffractogram was recorded for each synthesised compound using
the X-ray diffractometer available at SIDIX (the X-ray facility of the University of
La Laguna). Diffraction data collected under pressure were provided by the CCDD (the group
with which the supervisors of this work carry out their research). Note that, in addition to
the β′-Tb2(MoO4)3 data, data for the β′ phases of the Ho and Eu molybdates were also available.
Using the ICSD database, we simulated the different phases expected to be found,
against which a visual identification of the phases was performed.
Applying the Le Bail refinement method, the intensities of the full profile were refined
to verify the existence of the expected phases.
The synthesis of europium molybdate was carried out at 500 °C, 550 °C and
600 °C. After the analysis and refinements, different mixtures of phases with the structural
types Sm2O3, MoO3, Eu4Mo7O27 and Eu2Mo4O15, together with the α-Eu2(MoO4)3 phase,
were detected. Furthermore, by analysing the pure β′-Eu2(MoO4)3 phase under pressure,
it was observed that a phase transition occurs around 2.23 GPa, interpreted
as a decomposition into the β′, Eu2O3 and Eu2Mo4O15 phases, while at 5 GPa
amorphisation sets in.
Holmium oxide, β′-Ho2(MoO4)3 and non-stoichiometric Y2Mo4O15-type phases were
identified for the holmium molybdate synthesised at 600 °C. Under pressure, as in the
case of the europium molybdate, a phase transition occurs around 2.3 GPa involving
the β′ phase, the rare-earth oxide and the Y2Mo4O15-type phase, and the
non-reversible amorphous phase starts at around 5 GPa.
In these cases, as well as in the Tb molybdate studied by the CCDD group,
the initial β′-RE2(MoO4)3 phase is not completely recovered upon decompression,
so the phase transition is not reversible, which leads us to think
that it is a decomposition induced by pressure.
In addition to the experimental work and the analysis and discussion of the
results, we would like to highlight the literature review carried out, which is a very
important part of this dissertation. The research was contextualised, its interest
explained, and an exhaustive description of the crystal structures of the
materials studied was given. It was also necessary to: 1) review and introduce some
crystallographic terms later used in the description of these structures; 2)
describe the experimental techniques used, reviewing their physical foundations; 3)
explain the tools used in the analysis of the data. The work was therefore structured
as follows:
Chapter 1. Introduction. This is divided into three sections: the state of the art,
the motivations and how the work was organised.
Chapter 2. Crystallographic basis for the description of polymorphs with formula
RE2(MoO4)3. In the first section we explain concepts about crystal structures
and symmetry relations: direct lattice and symmetry, point and space groups, and
group-subgroup relations. In the second section we describe the crystal structures
of the RE2(MoO4)3 family of compounds: the α, β′, γ and other non-stoichiometric
phases.
Chapter 3. Experimental preparation and diffraction. Here we explain how compounds
are prepared by solid-state synthesis, giving details of the materials, the equipment,
the stoichiometric calculations and the solid-state reaction. The second section
is devoted to X-ray diffraction by powder samples: diffraction concepts are introduced,
the operation of powder diffractometers is explained, and the measurement conditions
at SIDIX and at the DIAMOND synchrotron are given.
Chapter 4. Analysis of results and conclusions. The first section explains the
procedure followed for phase identification, using databases, CIF files and
simulations of the possible phases. The second section explains the refinement by the
Le Bail method. In the third section, the diffractograms measured at SIDIX (phase
identification and refinement) are analysed and discussed. In the fourth section,
the diffractograms measured at DIAMOND (phase identification and refinement) are
analysed and discussed. In the fifth section the conclusions are developed, and in the
sixth section a possible continuation of this TFG is proposed.
2021-02-25T10:00:23Z
2021-02-25T10:00:23Z
2021
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/22350
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/15733 2021-11-05T09:02:30Z
Implementation of small-angle X-ray scattering technique for nanomaterials at Servicio Integrado de Difracción de Rayos X
Curbelo Cano, Zaida
González Platas, Javier
SAXS
2019-07-26T10:50:05Z
2019-07-26T10:50:05Z
2019
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/15733
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/15729 2021-11-05T09:02:29Z
Detección automática de meteoros en redes de cámaras comerciales de seguridad
Rodríguez Alarcón, Miguel
Serra Ricart, Miquel
meteors
The recovery of meteorites is one of the most valuable resources for providing clues to
understand the origin, formation and composition of our Solar System. Thousands
of them hit the Earth's atmosphere every day, but the vast majority disintegrate
quickly along their path and never reach the ground. In contact with air
molecules, and owing to their high kinetic energies, they leave luminous trails in the
sky, called meteors, from which it is possible to calculate their trajectory, both
before atmospheric entry (their orbit) and afterwards, when they impact. Appropriate
instruments to detect them at entry, together with the ability to analyze all the data
they produce, are necessary so that, in case of impact, the location
and origin of the meteorite can be known. In this work, the architecture of the Fireball Alert
and Exploration Terrestrial Observation Network (FAETON) meteor network has
been implemented, which uses commercial video-surveillance cameras to detect bright
meteors. In addition, software for processing the image sequences, analyzing them and
confirming the presence of meteors has been developed. Furthermore, different techniques
currently used in Computer Vision have been introduced, with the intention of advancing
the state of the art in tracing meteors and characterizing their trajectories in images with
geometric distortion. A total of 2824 possible detections have been analyzed using neural
networks, reaching a precision of 88.0 % in correctly classified meteors with 4.6 % false positives.
2019-07-26T10:40:15Z
2019-07-26T10:40:15Z
2019
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/15729
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/4274 2021-11-05T09:02:13Z
Global Optimization on Complex Systems
Díaz Pérez, Roberto
Hernández Rojas, Javier
2017-03-17T14:40:05Z
2017-03-17T14:40:05Z
2017
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/4274
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 internacional)
oai:riull.ull.es:915/20661 2021-11-05T09:01:56Z
Global optimization in complex systems
Ivanov Kurtev, Kiril
Hernández Rojas, Javier
Global Optimization
Complex Systems
Basin-Hopping
Global optimization of complex systems is a field of great interest in
different branches of science. The goal of global optimization is to find
the global energy minimum of a system, in most cases based on a nonlinear
model, in the presence of many local minima. This is useful
in fields such as economics, the natural sciences and engineering. In this work
we deal with noble-gas clusters in order to find their most probable
structure at low temperatures.
To this end, we use a model of the interaction between the atoms based on the
Lennard-Jones potential. The interaction among all the atoms gives rise to
a potential-energy surface whose global minimum is intimately linked
to the most probable structure the cluster will adopt.
We also present the three methods most commonly used to solve this
kind of problem, showing their strengths and drawbacks. These
algorithms are the genetic algorithm (GA), simulated annealing (SA) and basin-hopping (BH). This work focuses on the last of these, BH.
Finally, we present the results obtained and discuss the usefulness
of the work, as well as proposing some improvements for a future continuation of the project.
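The basin-hopping approach on a Lennard-Jones potential-energy surface can be sketched with SciPy's generic implementation; the three-atom cluster, seeds and iteration count below are illustrative assumptions of ours, not the systems studied in this work.

```python
import numpy as np
from scipy.optimize import basinhopping

def lj_energy(x):
    """Total Lennard-Jones energy (reduced units) of N atoms.

    x holds the flattened 3N Cartesian coordinates; every pair at
    distance r contributes 4 * (r**-12 - r**-6).
    """
    pos = x.reshape(-1, 3)
    energy = 0.0
    for i in range(len(pos) - 1):
        d = pos[i + 1:] - pos[i]
        r2 = np.sum(d * d, axis=1)      # squared pair distances
        energy += np.sum(4.0 * (r2 ** -6 - r2 ** -3))
    return energy

# random start for a 3-atom cluster; basin-hopping alternates random
# coordinate perturbations with local minimizations (here L-BFGS-B)
rng = np.random.default_rng(1)
x0 = rng.uniform(-1.0, 1.0, size=9)
result = basinhopping(lj_energy, x0, niter=50, seed=2,
                      minimizer_kwargs={"method": "L-BFGS-B"})
```

For the LJ trimer the global minimum is an equilateral triangle with energy −3 in reduced units, which `result.fun` should reproduce; larger noble-gas clusters only require changing the length of `x0`.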
2020-07-28T09:25:19Z
2020-07-28T09:25:19Z
2020
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/20661
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/162742021-11-05T09:01:50Zcom_915_668com_915_488col_915_678
Análisis de la influencia de las condiciones sinópticas sobre la precipitación en Canarias. Aproximación basada en análisis de componentes principales.
Álvarez Hernández, Aarón
González Fernández, Albano José
Expósito González, Francisco Javier
Meteorology
Weather types
Principal Components
Rainfall in the Canary Islands is very important for the social and economic life of its
population. However, precipitation varies strongly between islands and zones: there are
desert areas and very wet ones.
The trade winds (alisios) have an important influence on the humidity of the islands,
especially the western ones, which generally have more relief than the rest. These winds
are driven by the Azores anticyclone, which has more impact in the summer months.
Nevertheless, they are not the principal cause of the heavy rains in the archipelago:
those are mainly due to atmospheric disturbances that destroy the stability that the
trade winds generate.
In this project, the main objective is to characterize the impact of certain weather types
(WTs) on the precipitation of the islands, making use of two different databases, WRF and
SPREAD, over the period 1 January 1995 – 31 December 2004, that is, ten years of data. To
achieve this goal, Principal Components Analysis (PCA) will be used to determine 4 regions,
also known as principal components (PCs), from the precipitation values recorded in those
databases. Jones's equations and rules will also be applied; they define a classification
method that identifies the WT of a specific day from its pressure disturbances.
Determining the latter requires the sea-level pressure values available in the NCEP/NCAR
Reanalysis-1 database.
To apply the PCA, a region must be defined to make the calculation of the PCs possible. The
chosen area is 27.025° N – 29.975° N, 13.025° W – 18.975° W (see figure 2). Each component
has an associated explained variance, which is related to the corresponding amount of
information. It is important to note that the calculated components are rotated.
For the determination of the WTs, another region must be defined. Because of the spatial
resolution of the NCEP/NCAR database, this zone is defined between the coordinates 20° N –
40° N, 10° W – 25° W (see figure 3). With the information extracted from the results of
applying Jones's method, tables of values will be computed. Percentiles will also be
calculated to determine which WTs are most important for intense precipitation in the
Canary Islands.
After finishing this procedure, the results will be discussed. First of all, the WT
classification has detected 1266 days of anticyclonic type (WT2), 1234 of directional types
(WT1), 993 of hybrid types (WT0), 74 days of cyclonic type (WT3) and 86 undefined days (U)
(see figures 18 and 19). With that, the regions obtained by applying the PCA will be
analysed. To do this, the 95th percentile will be studied, because it discriminates the
light rains from the heavy ones:
• WRF database: (see figure 6 and table 13)
− W-PC1: this component is formed by Fuerteventura and Lanzarote. There, the WTs
that usually cause heavy rains are the west winds (23.67 % of the accumulated precipitation in the 10 analysed years), the cyclonic type (16.56 % of the accumulated
precipitation) and the northwest winds (16.41 % of the accumulated precipitation).
Other WTs that are less important are the east (14.47 % of the accumulated precipitation) and the northeast winds (10.14 % of the accumulated precipitation).
− W-PC2: this one is formed by the north-western islands of the archipelago, except
the northeast of Tenerife and La Palma. The most important WTs in this zone
are the cyclonic type (30.32 % of the accumulated heavy precipitation) and the west
winds (25.03 % of the accumulated precipitation). Also the east (12.86 % of the
accumulated precipitation) and northeast (10.38 % of the accumulated precipitation)
winds have influence.
− W-PC3: this component considers the island of Gran Canaria and a part of the coast
of Santa Cruz de Tenerife. In this region the cyclonic type is again very important
(27.55 % of the accumulated precipitation), and also the northeast (18.72 % of the
accumulated precipitation), east (18.28 % of the accumulated precipitation) and west
winds (17.45 % of the accumulated precipitation).
− W-PC4: the last component is formed by the northeast of La Palma and Tenerife, a region characterized by laurisilva that stands out because of its moisture. The
northeast winds are now the most important WT in the region (22.00 % of the accumulated precipitation). The cyclonic type (21.18 % of the accumulated precipitation)
and the west (16.40 % of the accumulated precipitation) and east winds (14.29 % of
the accumulated precipitation) are also important for the rains in the zone.
• SPREAD database: (see figure 13 and table 14)
− S-PC1: this region includes almost all Fuerteventura, the south of Gran Canaria
and some parts in the east of Tenerife. The WTs that cause the heaviest rains in this
zone are the west winds (34.15 % of the accumulated precipitation) and the cyclonic
type (33.01 % of the accumulated precipitation). Together, these two WTs account for nearly
two thirds of the intense rains in the region. Less prominent are the east winds (11.55 %
of the accumulated precipitation).
− S-PC2: this component is formed by the north-western islands, except the north
and northeast of Tenerife, and the west of Gran Canaria. There, the most important WTs are the cyclonic type (35.86 % of the accumulated precipitation) and the
west winds (28.27 % of the accumulated precipitation). The southwest winds have
also importance (9.82 % of the accumulated precipitation), but much less than the
aforementioned WTs.
− S-PC3: this component covers Lanzarote and the north of Fuerteventura. Here,
the west winds dominate (43.45 % of the accumulated precipitation), practically double the
contribution of the cyclonic type (22.63 % of the accumulated precipitation). The east
winds have less importance (9.98 % of the accumulated precipitation).
− S-PC4: this region includes the north of the capital islands. The most important
WTs are the cyclonic type (23.62 % of the accumulated precipitation) and the northwest (18.41 % of the accumulated precipitation) and west winds (15.81 % of the accumulated precipitation). The northeast (14.60 % of the accumulated precipitation)
and north winds (10.36 % of the accumulated precipitation) are important too, but
to a lesser extent.
The regions W-PC2 and S-PC1 are quite similar, and the components W-PC4 and S-PC4 are also
somewhat similar. For both databases, the cyclonic type is very important for the heavy
rains in the archipelago, as are the west and east winds. On the other hand, the
anticyclonic type is hardly ever the cause of intense precipitation in the Canary Islands,
although the south and southeast winds matter even less. Finally, the northeast winds are
important in all the components calculated using the WRF database, whereas with the SPREAD
database these winds are mainly important in the north of the capital islands.
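The two core numerical steps of the abstract above, extracting principal components from a (days × grid points) precipitation matrix and isolating heavy-rain days with the 95th percentile, can be sketched on synthetic data as follows. The component rotation used in the thesis is omitted for brevity, and the grid size and random rainfall field are illustrative assumptions, not the WRF/SPREAD data:

```python
import numpy as np

def principal_components(precip, n_pc=4):
    """PCA of a (days x grid points) precipitation matrix via SVD of the
    daily anomalies; returns the first n_pc loading patterns and their
    explained-variance ratios (no rotation in this sketch)."""
    anomalies = precip - precip.mean(axis=0)
    _, s, vt = np.linalg.svd(anomalies, full_matrices=False)
    explained = s**2 / np.sum(s**2)
    return vt[:n_pc], explained[:n_pc]

def heavy_rain_days(series, q=95):
    """Indices of the days whose regional precipitation exceeds the q-th
    percentile, separating heavy rains from light ones."""
    threshold = np.percentile(series, q)
    return np.flatnonzero(series > threshold), threshold

# Synthetic stand-in for ten years of daily precipitation on 120 grid points.
rng = np.random.default_rng(1)
precip = rng.gamma(shape=0.3, scale=5.0, size=(3652, 120))
loadings, evr = principal_components(precip)
heavy, thr = heavy_rain_days(precip.mean(axis=1))
```

With real data, each day's precipitation would then be attributed to the Jones WT of that day and accumulated per component, as the percentages in the abstract illustrate.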
2019-10-03T09:15:13Z
2019-10-03T09:15:13Z
2019
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/16274
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/162542021-11-05T09:01:47Zcom_915_668com_915_488col_915_678
Introduction to string theory
Ferrera González, Carlos
Gómez Llorente, José María
The following work begins, in its first chapter, by trying to emphasize why String Theory has been
developed, explaining which questions it seeks to answer; to that end, a series of ideas revolving
around the historical search for unification in physics are presented. After this brief discussion,
the next chapter reviews the classical string, since it is the element from which the more complex
models treated in later chapters are derived and inspired; it is used precisely to illustrate how
the concepts of action and Lagrangian mechanics are applied to the study of fields, making the
effectiveness of these tools clear. By way of example, a series of expressions for more complex
fields, such as the Schrödinger and Dirac equations, are also derived, in order to highlight the
power of the action in mathematical physics.
Chapter 4 studies the relativistic point particle; here we pause to review the basics of special
relativity, explain light-cone coordinates and understand compactification as a route to the
possibility of additional dimensions. After that, it is shown how to study the relativistic
particle starting from the proposal of an action, extracting from it the corresponding equations
of motion. Later, in chapter 6, this model is used to show how a non-quantum theory is quantized.
To do so, the dynamical variables of the system are identified and turned into the corresponding
operators, the commutation relations are obtained, a Hamiltonian is defined and its validity
tested, and the state space and Schrödinger equation of the free quantum particle are constructed.
In parallel, the relativistic string is studied. By means of the world sheet generated by the
string, an idea based on the world line of the point particle, the Nambu-Goto action is justified;
this action becomes the basis of our model. From it and from the boundary conditions imposed on
the ends of our string, we derive its wave equation and the form of its potential energy, obtain
that the endpoints of free strings move transversely at the speed of light, and evaluate its
conservation laws. With this information we develop the solution of the equation of motion, which
we then present, without explicit proof, in the light-cone formalism, defining in the process the
transverse Virasoro modes. Finally, we quantize the relativistic string following a procedure
similar to that of the point particle, and discuss how the string becomes a harmonic oscillator
whose vibration modes mark the difference with the point particle. Lastly we present, without
attempting an explicit proof, how bosonic String Theory predicts the existence of up to 26
dimensions, that is, 22 additional dimensions.
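For reference, the Nambu-Goto action mentioned in the abstract, the starting point of the string model, takes the standard textbook form (with $T_0$ the string tension, $\dot{X} = \partial X / \partial \tau$ and $X' = \partial X / \partial \sigma$ the derivatives of the world-sheet embedding):

```latex
S_{\mathrm{NG}} = -\frac{T_0}{c}\int_{\tau_i}^{\tau_f} d\tau
\int_{0}^{\sigma_1} d\sigma\,
\sqrt{\bigl(\dot{X}\cdot X'\bigr)^{2}-\bigl(\dot{X}\bigr)^{2}\bigl(X'\bigr)^{2}}
```

The square root is the proper area element of the world sheet, so the action is proportional to the area swept by the string, in analogy with the proper-time action of the point particle.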
2019-10-02T13:40:10Z
2019-10-02T13:40:10Z
2019
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/16254
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/291022022-11-17T12:27:13Zcom_915_668com_915_488col_915_678
Búsqueda fotométrica de variables cataclísmicas adecuadas para estudios dinámicos
Martín Escabia, Alejandro
Rodríguez Gil, Pablo
Pérez Torres, Manuel
Degree in Physics
In this work we present a fast method designed to detect the secondary star in the
optical/infrared in cataclysmic variables from their spectral energy distributions (SEDs),
computed from the magnitudes and distances published by different all-sky surveys. Our
sample contains novae of our Galaxy whose eruptions occurred more than 50 years ago and
which are nowadays in quiescence. Detecting the secondary star will make it possible to
carry out dynamical studies that support or refute the theoretical models that infer the
white dwarf mass (in many cases above 1 M⊙) from the brightness decay curve of the nova.
We have implemented a series of PYTHON codes that automate the generation of the SEDs. Of
the 44 systems studied, we have found hints of the presence of the secondary star in 12
non-symbiotic novae. The results obtained have been compared with spectroscopic studies in
the literature, which corroborate the reliable detection of the secondary in three cases
(GK Per, V841 Oph and BD Pav). In addition, we have identified in these previous works
other candidates that need better-quality spectra to give a definitive answer.
Cataclysmic variables are semi-detached binary systems composed of a white dwarf primary star and a low-mass, typically main-sequence, secondary star.
The latter overfills its Roche lobe and transfers matter into the Roche lobe of the
white dwarf. The way this mass is driven on to the primary star will depend on
its magnetic properties. For weakly-magnetic white dwarfs, the stream of transferred material, which has a non-zero angular momentum relative to the white
dwarf, goes into orbit around it eventually forming an accretion disc. If, on the
contrary, the white dwarf is strongly magnetic, the formation of an accretion disc
is not possible and the material is channelled along the magnetic field lines of the
primary onto its surface to end up colliding near its magnetic poles.
Cataclysmic variables are given their exotic name as a consequence of the characteristic outbursts they undergo. According to their recurrence time and brightness amplitude, these are classified into three groups: classical novae, recurrent
novae and dwarf novae. Novae draw their energy from thermonuclear reactions
on the white dwarf surface, while dwarf novae rely on accretion disc instabilities
to power their milder eruptions. The main distinction between classical and recurrent novae is that the former ones have not been observed to erupt more than
once, while recurrent novae seem to recur on timescales of decades or centuries.
Theoretical models based on the novae brightness decay curves predict that
white dwarfs in novae would have masses larger than 1 M⊙, even close to the
Chandrasekhar mass limit, which makes these systems candidate type Ia supernova progenitors. To test these predictions, it is necessary to carry out dynamical
studies that can provide accurate masses for both the white dwarf and its companion. This can only be done if the absorption lines of the secondary star are
detected in the spectrum. However, the accretion disc generally happens to be
the main contributor to the optical light, often overshining the secondary star.
For this reason, it is important to search for many systems where signatures of
the secondary stars can be seen.
In this work, we have compiled a sample of Galactic novae whose outbursts
were registered at least 50 years ago, so that they have had enough time to return to their quiescent state. We have analyzed the morphology of their spectral
energy distributions (SEDs), and determined whether the secondary star contributes significantly to the brightness of the system.
The SED of an object accounts for the variation with wavelength of its emitted
energy, and can be considered as a very low resolution spectrum that gives us an
idea of the shape of the continuum. The spectral continua of the white dwarf and
the secondary star can be approximated as black bodies at their respective temperatures. However, the signature of the accretion disc is different: while the outer
parts are relatively cold (≃ 5000 K), the inner parts get hotter the closer they are
to the white dwarf, reaching temperatures of ≃ 30000 K in the innermost regions.
Therefore, the spectral continuum of such a disc can be regarded as the sum of the
black-body emissions of a series of rings whose temperatures decrease smoothly outwards. The result of this superposition is an emission curve that follows a
power law as a function of the wavelength.
In those cases where the emission of the accretion disc is the main contributor
to the optical light, the shape of the resulting SED will be that of a power law. On
the contrary, if the secondary star contributes significantly to the total flux, the
shape of the SED will be the result of the sum of a power law and a typical blackbody curve. It is precisely this that we will take advantage of to make a selection
of novae in which the secondary star may be detected.
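The construction described above, a disc continuum built as the area-weighted sum of black-body rings whose temperature decreases smoothly outwards, can be sketched as follows. The radial profile T(r) ~ r^(-3/4) is the standard steady-disc assumption and, like the ring count, is an illustrative choice; only the 5000–30000 K temperature range comes from the abstract:

```python
import numpy as np

H = 6.626e-34    # Planck constant (J s)
C = 2.998e8      # speed of light (m/s)
KB = 1.381e-23   # Boltzmann constant (J/K)

def planck(wavelength, temp):
    """Black-body spectral radiance B_lambda(T)."""
    return (2.0 * H * C**2 / wavelength**5
            / np.expm1(H * C / (wavelength * KB * temp)))

def disc_sed(wavelengths, t_in=30000.0, t_out=5000.0, n_rings=200):
    """Disc continuum approximated as the area-weighted sum of black-body
    rings whose temperature falls as T(r) ~ r^(-3/4) from t_in to t_out."""
    r_out = (t_in / t_out) ** (4.0 / 3.0)      # radius where T drops to t_out
    radii = np.linspace(1.0, r_out, n_rings)   # in units of the inner radius
    temps = t_in * radii ** -0.75
    ring_area = 2.0 * np.pi * radii * (radii[1] - radii[0])
    return np.sum(ring_area[:, None] * planck(wavelengths[None, :],
                                              temps[:, None]), axis=0)

# Optical-to-infrared sampling (metres): hot inner rings dominate the blue
# end of the SED, cool outer rings the red end.
wl = np.linspace(3e-7, 2e-6, 50)
sed = disc_sed(wl)
```

Between the spectral peaks of the hottest and coolest rings, this superposition approximates the power-law continuum mentioned above, which is what distinguishes a disc-dominated SED from one with a significant secondary-star black-body bump.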
To calculate the SEDs of our sample we have compiled photometric measurements from the optical to the mid-infrared from the following sky surveys: Pan-STARRS Data Release 1 (PS1), Sloan Digital Sky Survey (SDSS), the AAVSO Photometric All-Sky Survey (APASS), Two Micron All Sky Survey (2MASS) and Wide-field
Infrared Survey Explorer (WISE). The magnitudes have been corrected for interstellar extinction using the distances inferred from Gaia data given by Bailer-Jones
et al. (2021) and the three-dimensional dust reddening map by Green et al. (2019).
We retrieved all this information using the VizieR Catalogue Service, an astronomical catalogue search application provided by the Centre de Données astronomiques de Strasbourg (CDS). To make the data treatment easier and produce the
SEDs of the objects in our sample, we have developed PYTHON codes that automate the calculation process.
We classified the systems based on the morphology of their SEDs and searched for spectroscopic evidence of their secondary stars in the literature. From
our results we can conclude that it is possible to infer whether the secondary star
appreciably contributes to the optical and infrared light of a nova remnant from
its photometric SED.
Of the 44 novae studied, besides six symbiotic stars, we found indications of
the presence of the secondary star in 12 of them. In three of these 12 cases, previous spectroscopic studies had confirmed that the spectrum of the secondary
star is appreciable: GK Per, V841 Oph and BD Pav. This nicely shows the utility of
our photometric method. Signatures of the secondary star are not unambiguously
detected in the case of V368 Aql, in which a brighter spectrum towards the red
is observed. This, together with its long orbital period, is compatible with the secondary contributing significantly to the system brightness. Finally, NSV 11561
has been proposed as a K4 V star, but it might be the bright secondary star dominating its spectrum. It would be interesting to carry out radial velocity studies of
this system to clarify its nature.
2022-07-19T10:31:08Z
2022-07-19T10:31:08Z
2022
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/29102
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/42762023-02-03T06:01:36Zcom_915_668com_915_488col_915_678
Análisis del ciclo de actividad solar y su influencia en la Tierra
Torregrosa Alberola, Álvaro
Roca Cortés, Teodoro
2017-03-17T14:40:15Z
2017-03-17T14:40:15Z
2017
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/4276
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 internacional)
oai:riull.ull.es:915/255172023-02-03T10:51:29Zcom_915_668com_915_488col_915_678
Entropy Production in Quantum Systems
Rivero Herrera, Fabián
Alonso Ramírez, Daniel
The aim of this work is to study real physical systems. It begins
with a relatively simple model, the Jaynes–Cummings model, chosen in order to
illustrate the concepts about entropy developed theoretically up to that point. Being a
simple interaction model, and because of its characteristics, it is not the most suitable
for studying the relation obtained for the entropy production, so a further case was treated.
To illustrate the quantities that represent the entropy production, a model already used
for this purpose in the literature was chosen, so that the reliability of the results
obtained could be checked. The results were not only reproduced; we tried to go further by
studying particular features of the time evolution of the system. The validity and
usefulness of the relation chosen for the entropy production is thus confirmed, not only
by comparison with the results of the original authors, but also from a physical point of
view, based on the interpretations justified in the earlier parts of the work.
Finally, an introduction to fluctuation theorems is given. These are an extension of the
second law of thermodynamics and a very recent field of study, developed mostly during the
last few decades. The mathematical complexity increases, but so does the range of
situations that can be studied. This work only gives an introduction to the topic, enough
to apply the formalism to theoretical results seen in the previous parts. In addition, a
fluctuation theorem is derived from other theorems that can be found in the bibliography.
The importance of fluctuation theorems, and their prominent role in current research,
becomes clear.
2021-10-05T07:38:51Z
2021-10-05T07:38:51Z
2021
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/25517
en
http://creativecommons.org/licenses/by-nc-nd/4.0/
info:eu-repo/semantics/openAccess
Attribution-NonCommercial-NoDerivatives 4.0 Internacional
oai:riull.ull.es:915/250132021-11-05T09:02:04Zcom_915_668com_915_488col_915_678
Caracterización de un equipo para radioterapia intraoperatoria (ioRT-50)
Prieto González, Daniel
Garrido Bretón, Carlos
Torres Betancort, Manuel Eulalio
Radiophysics
Intraoperative radiotherapy
ioRT-50
About 30,000 new breast cancers are diagnosed in Spain each year [1][2], most of them in
women. The stages at which these are found at the time of detection vary widely, but,
given the characteristics of this type of disease, the most usual treatment is removal of
the tumour by surgery (lumpectomy). After the operation, the patient undergoes external
radiotherapy sessions for up to five or six weeks after the intervention.
This procedure involves great physical, psychological and economic effort on the part of
the patient and the medical team. The current trend in oncology is to minimize this type
of process, making it less invasive, less aggressive and shorter. For this reason, centres
such as the Hospital Universitario de Canarias (HUC) have acquired devices such as the
ioRT-50 (intraoperative radiation therapy), from the German company Ecklert &
Ziegler-Womed (WOLF-Medizintechnik GmbH). It is a state-of-the-art instrument that allows
the application of superficial and intraoperative radiotherapy, acquired by the hospital
in December 2017 and put into service in the summer of 2019 [3][4].
The ioRT-50 can be used in brachytherapy after breast cancer removal under specific
clinical criteria related to the patient's age, the size and location of the tumour, and
the staging of the disease. When its application is possible, the instrument is introduced
into the tumour bed and irradiates it to ensure the elimination of possible residual
cancer cells. In this way, the patient does not have to undergo subsequent radiotherapy
and wakes up from the anaesthesia with the local treatment already performed.
To guarantee safety, a characterization of the emitted radiation beam and a quality
control that ensures the correct operation of the device are necessary. This document
details the process carried out to characterize the radiation profiles, checking that the
calculated dose values agree with the results provided by the manufacturer.
First, the calibration curve, or sensitometric curve, is obtained. This graph relates the
duration of the shots and the darkening of the radiological films to the delivered dose.
For this, a cylindrical applicator and radiation-sensitive films called ETB3 are used. The
experimental setup consists of solid-water slabs, an ionization chamber and an
electrometer, as well as a barometer and a thermometer.
Twelve films are irradiated for different times, from thirty seconds to six minutes.
Meanwhile, the charge is measured three times. A linear fit then determines the remaining
values. Charge is converted to dose via conversion factors. The films are digitized with a
scanner and analysed with the free software ImageJ, obtaining the pixel darkening value.
Finally, this parameter is related to the measured dose data through a hyperbolic fit.
Once the reference curve has been calculated, the surgical spherical applicators are used.
The next part of the characterization consists of determining the shot duration for each
applicator needed to deliver a dose of 12.5 Gy at 5 mm depth. The process explained below
is repeated with the five applicators. These devices have diameters from 35 to 55 mm, in
steps of five. The experimental setup is as follows: films designed for the task, the
Rad-Control II software, and a tank filled with water. Once the device is prepared, the
necessary films are irradiated. Sixteen exposures are made per applicator, eighty in
total. The films are grouped into series of eight. The shot duration is one minute for the
SP-35, SP-40 and SP-45 applicators and two minutes for the SP-50 and SP-55, so that the
data do not vary much.
Once irradiated, the films are collected, scanned and analysed with ImageJ. The line tool
is used to obtain the "Plot Profile" of the films at different angles (40°, 50°, 70°, 90°,
130°, their opposites, and 180°). The data are grouped in batches of four films in an
Excel file and run from the centre of the film to the edge. Next, the absolute value of an
angle is selected and the PDD curve is calculated by means of a fit in Excel. The dose
rate is plotted against depth and the curve parameters are calculated: the surface dose
rate and the attenuation coefficient.
Once these coefficients are obtained for all the angles of an applicator, they are entered
into a table. The mean is calculated and the dose (12.5 Gy) is divided by it, yielding the
shot duration. The process is repeated for the other twelve films and for all the
applicators. The new data are compared with those of the previous characterization and
their features are discussed. If they have not changed excessively, the result is
considered satisfactory.
Finally, the use of the ioRT-50 in an intraoperative radiotherapy session is described.
First, it is calibrated in the morning by the medical physics team, a simple process that
takes less than half an hour. If the results are favourable, the instrument is taken to
the operating theatre. Once the tumour has been removed and the nuclear medicine team has
acted, the shot is applied with the time calculated from the previous calibration. Then
the incision is closed and the intervention ends. Taking part in a process of this kind is
fundamental to the experience of being a medical physicist.
About 30,000 new breast cancers are diagnosed in Spain each year [1][2], most of them in
women. The stages in which these are usually found at the time of detection are very
varied, but, given the characteristics of this type of disease, the most common action in
treatment is the removal of the tumour by surgery (lumpectomy). After the operation, the
patient undergoes external radiotherapy sessions up to five or six weeks after the
intervention.
This procedure involves great physical, psychological and economic efforts on the part
of the patient and the medical team. The current trend of oncology is to minimize this
type of process, making them less invasive, less aggressive, and shorter in time. For this
reason, centres such as the University Hospital of the Canary Islands (HUC) have
acquired devices such as the ioRT-50 (intraoperative radiation therapy), from the German
company Ecklert & Ziegler - Womed (WOLF-Medizintechnik GmbH). It is a modern
instrument that allows the application of superficial and intraoperative radiotherapy,
acquired by the hospital in December 2017 and put into service in the summer of 2019 [3][4].
IoRT-50 can be used in brachytherapy after breast cancer removal under specific clinical
criteria, related to the patient's age, tumor size and location, and disease staging. If it is
possible, the instrument is introduced into the tumor bed, radiating it to ensure the
elimination of possible residual cancer cells. In this way, the patient does not have to
undergo subsequent radiotherapy and wakes up from the anesthesia with the local
treatment performed.
To ensure patient safety, a calibration is necessary to guarantee the proper functioning of
the device. The following document details the process carried out for this purpose,
characterizing the radiation profiles and checking that the dose values obtained agree with
the results of the previous calibration and those provided by the literature.
First, the calibration curve, or sensitometric curve, is obtained. This graph relates the
duration of the shots and the darkening of the radiological films to the delivered dose.
For this purpose, a cylindrical applicator and radiation-sensitive films called ETB3 are
used. The experimental assembly consists of solid water sheets, an ionization chamber,
and an electrometer. A barometer and a thermometer are also available.
Twelve films are irradiated for different times, from thirty seconds to six minutes.
Meanwhile, the charge is measured three times. A linear fit then determines the remaining
values. Charge is converted to dose by conversion factors. The films are digitized with a
scanner and analysed with the free software ImageJ, and the pixel darkening value is
obtained. Finally, this parameter is related to the measured dose data by a hyperbolic
fit. When the reference curve has been calculated, we work with the surgical spherical
applicators.
The next part of the calibration consists of determining the shot duration with each
applicator needed to deliver a dose of 12.5 Gy at 5 mm depth. To do this, the process
explained below is repeated with the five applicators. These devices have diameters from
35 to 55 mm, in steps of five. The experimental assembly is as follows: films designed for
the job, the Rad-Control II software, and a tank filled with water. Once the device is
prepared, the necessary films are irradiated. Sixteen shots are made for each applicator,
eighty in total. The films are grouped into series of eight. The shot duration is one
minute for the SP-35, SP-40 and SP-45 applicators and two minutes for the SP-50 and SP-55,
so that the data do not vary much.
The films are then collected, scanned, and analysed with ImageJ. The line selector is used
to obtain the “Plot Profile” of the films at different angles (40º, 50º, 70º, 90º, 130º, their
opposites, and 180º). The data are grouped into batches of four films in an Excel file,
arranged from the centre of the film to the edge. Then, the absolute value of an angle is
selected and the PDD curve is calculated by means of a fit in Excel: the dose rate versus
depth values are plotted and the curve parameters, the surface dose rate and the
attenuation coefficient, are calculated.
Once these are obtained for all angles of an applicator, the values are entered in a table
and their mean is calculated. The dose (12.5 Gy) is divided by this mean, which gives the
shot duration. The process is repeated for the other twelve films and for all applicators.
The new data are checked against those of the previous calibration and their characteristics
are discussed. If they have not changed much, the process has been successful.
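The per-angle fit and the shot-duration arithmetic can be sketched as follows, assuming the PDD curve is modelled as a simple exponential in the two parameters named above (surface dose rate and attenuation coefficient); the profile values are illustrative, not measured data from the work.

```python
import math

# Sketch of the shot-duration step for one applicator, assuming the PDD is
# modelled as D(z) = D0 * exp(-mu * z), with D0 the surface dose rate and mu
# the attenuation coefficient; the profile values are illustrative.

def fit_exponential(depths, rates):
    """Least-squares fit of ln(rate) = ln(D0) - mu*z; return (D0, mu)."""
    ys = [math.log(r) for r in rates]
    n = len(depths)
    mx, my = sum(depths) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(depths, ys))
             / sum((x - mx) ** 2 for x in depths))
    return math.exp(my - slope * mx), -slope

# Illustrative dose-rate profile for one angle (depth in mm, rate in Gy/min)
depths = [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]
rates = [1.50 * math.exp(-0.12 * z) for z in depths]

D0, mu = fit_exponential(depths, rates)
rate_5mm = D0 * math.exp(-mu * 5.0)   # dose rate at the 5 mm prescription depth
shot_minutes = 12.5 / rate_5mm        # duration needed to deliver 12.5 Gy
```

In the actual procedure the dose rate at 5 mm is averaged over all measured angles before dividing the prescribed 12.5 Gy by it.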
Finally, the application of ioRT-50 in an intraoperative radiotherapy session is discussed.
First, it is calibrated in the morning by the radiophysics team, a simple process that takes
less than half an hour. If conditions are favourable, the instrument is taken to the
operating room. When the tumour has been removed and the nuclear medicine team has
acted, the device is prepared. The radiology equipment delivers the shot with the time
given by the previous calibration; then the incision is closed and the operation ends.
Taking part in this experience is an important part of a radiophysicist's work.
2021-07-29T11:30:47Z
2021-07-29T11:30:47Z
2021
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/25013
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/215392021-11-05T09:01:59Zcom_915_668com_915_488col_915_678
Physical models of properties and structure of viral capsids
Hernandez Hernandez, Jose Javier
Gómez Llorente, José María
Viruses are the simplest biological systems in nature, and because of that they
were the first to be treated mathematically. It is fundamental to obtain as much
information about them through all branches of science as possible to be able to get
a full picture of their characteristics, due to the emergent properties of knowledge.
Therefore, their physical properties are as important as their biological or chemical
ones. We introduce some of the main physical models (self-assembly, kinetics, elasticity, etc.), with special emphasis on icosahedral capsids because of their symmetry
properties. We then develop the basis of a 60 asymmetric units coarse-grained model
that in conjunction with the symmetries of the icosahedral point group, allow us
to calculate the number of normal modes of an icosahedral virus without making
explicit calculations. We also gain some qualitative information about the behaviour
of the normal modes. These results are then compared with the actual calculations
of the normal modes of the Zika virus made by a South Corean reasearch group [1],
with good agreement.
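The mode-counting argument admits a compact numerical check. The sketch below assumes the simplest version of the counting: 60 asymmetric units treated as rigid bodies (6 degrees of freedom each) permuted freely by the icosahedral rotation group I of order 60, so that the 360 coordinates span 6 copies of the regular representation; the model in the thesis may differ in detail.

```python
# A simplified version of the symmetry-counting argument: 60 asymmetric units
# treated as rigid bodies (6 degrees of freedom each) and permuted freely by
# the icosahedral rotation group I (order 60). The 360 coordinates then span
# 6 copies of the regular representation, in which each irreducible
# representation appears as many times as its dimension. This only checks the
# bookkeeping of the counting.

irrep_dims = {"A": 1, "T1": 3, "T2": 3, "G": 4, "H": 5}  # irreps of I

copies = 6  # rigid-body degrees of freedom per asymmetric unit
multiplicity = {name: copies * d for name, d in irrep_dims.items()}
modes_per_irrep = {name: multiplicity[name] * d for name, d in irrep_dims.items()}

total = sum(modes_per_irrep.values())  # must recover all 60 * 6 = 360 coordinates
```

The per-irrep multiplicities give the number of distinct normal-mode frequencies of each symmetry type without diagonalizing anything explicitly.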
2020-10-06T10:30:20Z
2020-10-06T10:30:20Z
2020
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/21539
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/162572021-11-05T09:01:48Zcom_915_668com_915_488col_915_678
Propiedades físicas de las cápsides virales icosaédricas: modelos de potenciales de interacción y constantes de fuerza.
Bacallado Rivero, Adrián
Gómez Llorente, José María
Virology is a research field that needs physics to understand the behaviour of
viruses, since many of their mechanisms involve thermodynamics, kinetics, or
electrostatics. These are some of the viral properties that we will study
and explain.
In this final degree work, we will start by explaining the fundamentals of viruses,
introducing the capsids, the protein shells of viruses. We will consider
only the case of icosahedral viruses, since this geometrical form is the one
that appears most often in nature. These capsids are formed by protein subunits, the
capsomers. Icosahedral capsids are described by Caspar and Klug's models, which
introduce the triangulation number T, a very important parameter in virology.
One of the most fascinating features is the self-assembly of viral capsids, a
feature that we will explain via thermodynamics and kinetics.
We will also study the electrostatic interaction between the capsomers, and among
the capsomers in the formed capsid, through the Poisson-Boltzmann equation. Another
physical feature that we will study is the mechanical properties: viruses endure
external forces in their environment and the osmotic pressure that the genome
applies to the capsid. That force can be measured and studied.
The main focus of this work will be on the models that explain the interaction
potential. First, we will present the two families of models that describe these
interactions: coarse-grained and all-atom. Then, we will explain our two-body
interaction model, of the coarse-grained type, which uses trimers, a kind of
triangular capsomer. Afterwards, we will introduce the variables that characterize
the trimer orientation and the equilibrium conditions that fix the privileged
orientation of the trimers needed to form a capsid. Then, we will calculate the
second-derivative matrix of the interaction potential in order to calculate the
force constants. Finally, we apply the equilibrium conditions to the matrix to
obtain the force constants.
2019-10-02T13:40:38Z
2019-10-02T13:40:38Z
2019
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/16257
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/86972021-11-05T09:02:22Zcom_915_668com_915_488col_915_678
Fluorescence intensity ratio and whispering gallery mode techniques in optical temperature sensors. Comparative study
Paz Buclatin, Franzette
Martín Benenzuela, Inocencio Rafael
Física
2018-06-20T12:05:19Z
2018-06-20T12:05:19Z
2018
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/8697
en
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 internacional)
oai:riull.ull.es:915/250072021-11-05T09:02:01Zcom_915_668com_915_488col_915_678
Introduction to the analysis of systems in interaction with thermal baths: Langevin approach
Almanza Marrero, José Antonio
Ruiz García, Antonia
Langevin
non-equilibrium
Brownian motion
This work is conceived as an introduction to the study of the dynamics of systems
interacting with thermal baths. We address two scenarios: interaction with a single
thermal bath, and interaction with two different thermal baths. In the first scenario
the system evolves until it reaches a stationary state of thermal equilibrium with the
bath, while in the second the combined action of the different baths drives the system
towards a non-equilibrium stationary state in which transport properties emerge,
characterized by heat currents between the baths and the system.
In the case of interaction with a single bath we consider simple systems whose
dynamics can be solved analytically. We also address the numerical resolution of the
stochastic differential equations that describe this dynamics, and show the good
agreement between the numerical and analytical results. The analysis of systems
interacting with different thermal baths is carried out mainly through the numerical
resolution of the dynamics.
We begin our study by introducing the formalism needed to describe Brownian motion
within the theoretical framework employed, which is provided by the Langevin
description. In this model the action of the thermal bath translates into two terms of
clearly distinct character. On the one hand, the friction term describes how the
asymmetry of the coupling between a few slow degrees of freedom and many fast ones
leads to an energy flow from the former to the latter, which is the phenomenon of
energy dissipation [1]. On the other hand, the term known as the stochastic or
Langevin force accounts for the incessant collisions of the Brownian particle with
those of the surrounding medium.
Once the theoretical foundations are established, we solve some particular cases of
systems that reach equilibrium, showing how these states are characterized;
alternative methods for solving the dynamics are presented, and the agreement between
the analytical results and the numerical simulations is verified.
Finally, we turn to the study of systems out of equilibrium, where we introduce the
concept of local thermal equilibrium, a result that allows us to extrapolate
considerations proper to equilibrium systems to systems outside equilibrium. On this
basis, we characterize the temperature and the energy flows that appear in the system.
This work is presented as an introduction to the study of the dynamics of systems in
contact with thermal baths within the theoretical framework of the Langevin model. In
this framework, systems both in and out of equilibrium will be studied, characterizing
them by the kurtosis of their velocity distribution.
Although we only study very simple models, understanding the results shown below
provides the tools needed to carry out more complex studies, either by increasing the
number of particles in the system or by considering other types of interaction
potentials. In systems of these characteristics, anomalous transport phenomena
emerge [2] [3].
In chapter one we start by introducing the basic concepts necessary to characterize stochastic
processes, which are then applied to the specific case of Brownian motion. We also explain the
Langevin model, which will be used to define the thermal baths in this study.
In chapter two we focus on systems that are in contact with a single thermal bath. These
systems reach equilibrium when sufficient time has elapsed. Here we will study the behaviour
of the mean values of the different dynamic quantities of the particle in the transition regime
to equilibrium and once equilibrium has been reached. We analyze two systems: a free particle
and a particle confined in a harmonic potential. In this case it is possible to find the analytical
solution of the dynamics.
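The numerical side of such a comparison can be sketched with a simple Euler-Maruyama integration of the Langevin equation for the harmonic case. The thesis uses Platen's higher-order algorithm; this is only a crude stand-in with arbitrary parameters, checking equipartition and the Gaussian value of the velocity kurtosis mentioned above.

```python
import math
import random

# Minimal Euler-Maruyama integration of the Langevin equation for a particle
# in a harmonic potential (m = kB = 1):
#   dv = (-gamma*v - omega2*x) dt + sqrt(2*gamma*T) dW,   dx = v dt
# Parameters are arbitrary; we only check that equilibrium statistics emerge:
# equipartition (<v^2> ~ T) and Gaussian kurtosis (~3) of the velocities.
random.seed(0)
gamma, omega2, T, dt = 1.0, 1.0, 1.0, 0.01
noise = math.sqrt(2.0 * gamma * T * dt)

x, v = 0.0, 0.0
vs = []
for step in range(300_000):
    v += (-gamma * v - omega2 * x) * dt + noise * random.gauss(0.0, 1.0)
    x += v * dt
    if step >= 50_000:          # discard the transient towards equilibrium
        vs.append(v)

m2 = sum(u * u for u in vs) / len(vs)   # <v^2>: equipartition predicts ~T
m4 = sum(u ** 4 for u in vs) / len(vs)
kurtosis = m4 / m2 ** 2                 # ~3 for a Gaussian distribution
```

A kurtosis close to 3 signals the Gaussian velocity distribution of thermal equilibrium; in the two-bath case studied later, deviations from this value characterize the non-equilibrium steady state.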
Chapter three is devoted to the study of systems that are in contact with different thermal baths.
In this case the combined action of different thermal baths determines the steady state to be
non-equilibrium. We characterise such states in terms of the kurtosis and introduce the concept
of local thermal equilibrium (LTE).
Although for such systems it becomes almost impossible to obtain analytical solutions, we
show a semi-analytical method that allows us to analyse the state of the system once it is in
the non-equilibrium steady state.
The Einstein approach to Brownian motion, the proof of the Central Limit Theorem, and
the description of Platen's algorithm for the numerical resolution of the dynamical
equations are presented in the appendix.
2021-07-29T11:15:34Z
2021-07-29T11:15:34Z
2021
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/25007
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/96262021-11-05T09:02:23Zcom_915_668com_915_488col_915_678
La conexión entre las mecánicas clásica y cuántica: modelos semiclásicos
Díaz Calzadilla, Pablo
Gómez Llorente, José María
Física
2018-07-20T08:20:20Z
2018-07-20T08:20:20Z
2018
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/9626
en
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 internacional)
oai:riull.ull.es:915/291202022-11-15T13:39:21Zcom_915_668com_915_488col_915_678
Estudio de las causas sinópticas de las lluvias intensas en Canarias.
González Sicilia, Pablo
González Fernández, Albano José
Expósito González, Francisco Javier
Grado En Física
The climate in the Canary Islands is characterized by quasi-permanent stability
due to the air stratification over the archipelago. This configuration of layers is
composed of a low, moist layer of air that flows over an isothermal surface, mainly
because of the presence of the Canary Islands current and the NE trade winds; a
thermal inversion layer whose height oscillates yearly between 700 m and 1500 m; and
finally a dry layer of air with a NW wind circulation almost all year. Because of this,
climate perturbations in the islands are induced by alterations of this layer
configuration, and these perturbations constitute the main generators of intense
rainfall in the islands.
In this work, we study the synoptic configurations that produce “heavy rain” in
the archipelago in order to classify their main causes and their spatial and temporal
distribution in the islands. To do this, we made a selection of “heavy rain” dates
and then manually studied the synoptic patterns that cause them, splitting the synoptic
situations into three main types (Lows, Troughs, and Cut-off Lows) to create a temporal
series of the precipitation data.
Then, to study the spatial distribution of the rain, we separate the times series
of precipitation associated with each one of the synoptic patterns by:
Separation by sectors in the surroundings of the archipelago.
Creation of rain zones by clustering algorithms.
Mean anomalies of pressure, geopotential height, and temperature were calculated
for each of the sectors to identify the mean synoptic patterns. With this, various
statistical parameters were obtained for the precipitation data, based on the sectors
and rain zones, in order to study the spatial distribution of rain among the islands.
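The rain-zone step can be illustrated with a toy k-means clustering of synthetic stations. The feature vector (longitude, latitude, mean precipitation), the coordinates, and the choice of algorithm are assumptions for illustration only, not the study's actual configuration.

```python
import random

# Toy illustration of the rain-zone step: a small k-means clustering of
# synthetic stations described by (longitude, latitude, mean precipitation).
# Features, coordinates, and algorithm are illustrative assumptions.
random.seed(1)

def kmeans(points, k, iters=50):
    """Plain k-means on tuples of floats; returns final centres and groups."""
    centres = random.sample(points, k)
    groups = []
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda j: sum((a - b) ** 2
                                            for a, b in zip(p, centres[j])))
            groups[nearest].append(p)
        centres = [tuple(sum(col) / len(g) for col in zip(*g)) if g else centres[i]
                   for i, g in enumerate(groups)]
    return centres, groups

# Two synthetic "zones": wetter western stations and drier eastern ones
west = [(-17.9 + random.gauss(0, 0.1), 28.3 + random.gauss(0, 0.1),
         random.gauss(40.0, 5.0)) for _ in range(20)]
east = [(-13.6 + random.gauss(0, 0.1), 28.1 + random.gauss(0, 0.1),
         random.gauss(10.0, 3.0)) for _ in range(20)]

centres, groups = kmeans(west + east, k=2)
```

Each resulting group plays the role of a rain zone, for which the precipitation statistics can then be computed separately.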
The results obtained for this work show that the method developed for the study
is capable of identifying the main generators of “heavy rains” and their temporal and
spatial distribution, but it cannot be used to make predictions about the mean values
of rainfall.
In general, during the studied period, the rainfall pattern occurs throughout the
winter months with no heavy rainfall events for the months from April to September.
In addition, the synoptic situations with the highest number of associated dates are
the Troughs, followed by the Lows, and the Cutoff Lows. Finally, the spatial distribution of “heavy rains” is highly conditioned by the orography, with the western islands
registering the highest rainfall intensity values.
2022-07-19T11:00:42Z
2022-07-19T11:00:42Z
2022
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/29120
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/185702021-11-05T09:01:53Zcom_915_668com_915_488col_915_678
Predicción de irradiancia solar mediante modelos numéricos
Rojano Padrón, Alejandro
González Fernández, Albano José
Pérez Darias, Juan Carlos
Energía solar
Radiación
Meteorología
The growing importance of renewable energies in our energy system, and in
particular of photovoltaic solar energy, leads us to take on new challenges in order
to optimize energy production. One of these challenges is the strong dependence
of the power generation capacity of power plants on the meteorological conditions
of their environment. Energy suppliers must communicate sufficiently in advance,
between one and two days ahead, the energy that their plants will produce at any
given time; in the case of solar energy, clouds play a fundamental role in the
variability of the irradiance received by the solar panels and, therefore, in their
production.
In this paper, the capacity of the simulations performed with the atmospheric
model WRF (Weather Research and Forecasting) will be analyzed, examining
different predictions for a set of case studies.
For this purpose, firstly, the model must be correctly configured for the region to
be studied and, subsequently, the influence of the different physical
parameterizations on the irradiance calculation must be analysed and compared
with experimental data.
2020-02-27T12:05:15Z
2020-02-27T12:05:15Z
2020
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/18570
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/200572021-11-05T09:01:54Zcom_915_668com_915_488col_915_678
Study of the optical properties of different crystals doped with neodymium luminescent ions under extreme conditions of temperature and pressure
Guillermo Cabrera, María
Lavín Della Ventura, Víctor
Rodríguez Mendoza, Ulises Ruymán
A characterization of the optical properties of three crystalline lutetium garnets doped
with trivalent neodymium (Nd3+) luminescent ions has been carried out. For this purpose,
steady-state optical spectroscopy has been used to measure the absorption and luminescence
spectra, as well as time-resolved spectroscopy to determine the lifetimes of the excited
states of the Nd3+ ions in Lu3(GaxAl1-x)5O12 garnets. The change of the luminescence
with temperature has also been measured, in order to calibrate a low-temperature optical
sensor. Furthermore, experimental equipment has been developed to determine the pressure
in the sample chamber of a diamond anvil cell.
2020-06-30T10:32:17Z
2020-06-30T10:32:17Z
2020
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/20057
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/4312021-11-05T09:02:09Zcom_915_668com_915_488col_915_678
Estudio mecano-cuántico de materiales desde primeros principios : propiedades elásticas y estabilidad del EuVO4
Jorge Montero, Alejandro
Muñoz González, Alfonso
Rodríguez Hernández, Plácida
Física
Teoría cuántica
Física del estado sólido
2014-10-09T13:00:05Z
2014-10-09T13:00:05Z
2014
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/431
es
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras
derivadas 4.0 internacional)
oai:riull.ull.es:915/106062021-11-05T09:02:21Zcom_915_668com_915_488col_915_678
Estudio y caracterización de materiales luminiscentes con propiedades de conversión espectral para aplicaciones fotocatalíticas
Padrón González, Ubay
Méndez Ramos, Jorge
Física
2018-10-10T08:40:05Z
2018-10-10T08:40:05Z
2018
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/10606
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 internacional)
oai:riull.ull.es:915/291242022-11-17T10:18:43Zcom_915_668com_915_488col_915_678
Raíces filosóficas del pensamiento científico
Alonso Negrin, Rubén
Pérez Cruz, Justo Roberto
Grado En Física
The beginning of this work takes us back to ancient Greece, where the
first philosophers laid the foundations for rational thought, the logos. The Ionian
physicists, the Pythagoreans, Heraclitus and Parmenides, or the pluralists already
raised some of the great questions of philosophy and, in particular, some of the
points that we will deal with, concerning the relationship between philosophy and
modern science: change, qualities, being…
After the sophistic stage, Socrates put philosophy back on track, and it was one
of his pupils, Plato, who was responsible for developing the first great
philosophical system of the West. With his theory of Ideas, Plato discovered
immaterial being and thus metaphysics. He thereby opened up a new field of
knowledge, whose implications served not only the early Christians, such as
St. Augustine, in developing a philosophy based on their faith: even in
the scientific revolution, both in Galileo's mathematical universe and in the laws of
nature as understood by Newton and Descartes, it is difficult not to see a certain
echo of Platonism.
The next great system of Greek philosophy was carried out by one of Plato's
pupils, Aristotle. The philosopher of Stagira went against his master,
understanding the essence of being, not as something separate from the body, but
inseparable from it. Being is not only matter, but matter and form, forming an
indissoluble compound. The Stagirite understood that only in this way was it
possible to explain change, which had been, since Heraclitus and Parmenides, the
great question of Greek thought. Aristotle would inspire medieval scholasticism,
especially St. Thomas Aquinas, shaping the Aristotelian-scholastic image
of the world, to which the scientific revolution would be so firmly opposed. However,
as we shall see at the end of this paper, the thought that emerged as a result of the
scientific revolution was not able to explain the world to the same extent as the
Aristotelian categories.
Then Alexander the Great inaugurated the Hellenistic era. For the first time in
the history of the West, philosophy and natural science were separated. The former,
with its capital in Athens, focused on man and how he should live his life. Stoics,
Epicureans, Cynics and Skeptics were the great schools of the Hellenistic era. The
natural sciences flourished in Alexandria, around the library and museum. Euclid,
Aristarchus, Archimedes and Hipparchus are some of the great names of this
period.
We leave the Greek world behind to plunge into Rome. Philosophically, it could
be said that it was Hellas that conquered Rome; the Hellenistic and Platonist
schools continued in Middle Platonism and Neoplatonism, which inspired the first
Christian philosophers. In strictly scientific matters, theoretical knowledge gave
way to practical knowledge, Ptolemy and Galen being the only notable exceptions.
The Christian Middle Ages began by uniting Platonism with faith in Christ, as
we have already mentioned. Patristics was followed by scholasticism, which found
its greatest exponent in St Thomas, and with him, Aristotelianism definitively
found its place in Christian thought. In the scientific sphere, from the 13th century
onwards, interest in the natural sciences was reborn in Saint Albert the Great,
Roger Bacon and Grosseteste, for example. In fact, we could say that modern
science found its foundations in the late Middle Ages. The medieval period was
brought to a close by William of Ockham, who demolished traditional
metaphysics, and with it the relationship between faith and reason on which much
of Christian philosophy had been based. He thus opened the way for what was to
become modern science.
The Renaissance is, above all, a period of change, with literary, political and even
magical interests. We also find some predecessors of the scientific method, such as
Leonardo da Vinci and Telesius. On the other hand, the religious issues arising
from the Reformation would greatly influence the philosophy born in the scientific
revolution.
The revolution, in its purely scientific aspect, ranges from Copernicus to
Newton. However, in its philosophical aspect, on which we will focus, there are
four characters to take into account: Bacon, Galileo, Descartes and Newton.
The first, although an advocate of the experimental method, can hardly be
included among the fathers of modern science, since the complexity of his method
makes it impracticable. However, the eminently practical character that, according
to him, science should have, makes him a forerunner, and perhaps the spiritual
father of the industrial revolution.
Galileo puts forward a new concept of experiment in which the mind, through
theory, plays an active role in the process of observing nature. To this end, he
postulates a Universe written in mathematical language which allows its precise
description, as well as the prediction of its behaviour. Hence, the modern scientific
method was born, leaving behind all unquantifiable qualities and finality in nature.
The founder of modern philosophy, René Descartes, begins his philosophy with
a universal doubt that calls into question all knowledge. In this way, using his
famous method, he constructs a philosophy of a mechanistic character, in which the
dualism between matter (understood as pure extension) and the mind (totally
distinct from matter) stands out.
Finally, Isaac Newton culminated the scientific revolution. He took up the
principle of economy on which Ockham had based his philosophy and, in the light
of his theory of gravitation, he conceived a uniform nature indistinguishable in
heaven and earth, and postulated a corpuscular world, extensive and
impenetrable, with no other quality than its motion and inertia. These corpuscles
move in obedience to the laws of nature, which ultimately have a theological
foundation.
We will end by discussing some of the notions that the mechanistic worldview
neglects, such as teleology or form. We will see how these conceptions entail major
problems which, nevertheless, Aristotelian philosophy manages to solve
convincingly.
2022-07-19T11:01:14Z
2022-07-19T11:01:14Z
2022
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/29124
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/215462021-11-05T09:01:58Zcom_915_668com_915_488col_915_678
Selección de objetos para los estudios de arqueología galáctica con el instrumento WEAVE
García Jiménez, Francisco Javier
Battaglia, Giuseppina
After the success of the Gaia mission, launched in 2013 and originally intended to be
operational for 5 years while providing measurements of parallax, positions, proper
motions of stars, and photometry, it was proposed to extend the mission for another
5 years thanks to its excellent results. This time, however, the Gaia probe will not be
alone in its mission: the data it provides will be combined with those of the WEAVE
instrument (WHT Enhanced Area Velocity Explorer), a multi-object spectrograph to be
incorporated into the William Herschel Telescope (WHT) at the Roque de los Muchachos
Observatory on the island of La Palma, which will supplement the measurements and
information collected by Gaia with its own.
Four studies related to galactic archeology are planned with the WEAVE instrument,
but the one relevant to this TFG is a low-resolution survey of the high latitudes
of the Milky Way. In this mapping, WEAVE studies stars found at sky latitudes
corresponding to a field of view that covers the galactic halo and the thick disk
while excluding the thin disk.
On its own, Gaia can measure parallaxes to determine distances, proper motions to
determine tangential velocities, and low-resolution spectra; it also carries a
radial velocity spectrometer that provides chemical information about stars and,
as its name suggests, radial velocities. However, in the magnitude interval
16 <~ G <~ 21 mag the radial velocity spectrometer cannot provide such
information, and this is where the WEAVE instrument comes into play, complementing
Gaia with basic chemical information and radial velocity measurements (apart from
the various other surveys that WEAVE plans to conduct, which are not related to
this work). The problem, however, is that WEAVE cannot perform measurements for
as many stars as Gaia, so the targets to study must be selected carefully.
The targets chosen for the study of the high-latitude galaxy are red giants,
candidate extremely metal-poor stars, blue horizontal branch stars, and stars
that have turned off the main sequence. Within WEAVE, a selection has already
been made for the giants using magnitude and colour, together with the parallax
and proper-motion information provided by Gaia's astrometric instrument, to
reject local red main-sequence stars.
Although this selection is already good, a fraction of unwanted stars still falls
within the rather broad range of selected parameters, so the selection can
presumably be optimized further. That is precisely the objective of this study: to
optimize the selection of stars using the data gathered by Gaia's low-resolution
spectrometer.
In order to do this, we start from synthetic spectra used to simulate the data
set that Gaia's low-resolution spectrograph will eventually supply (the real data
are not yet available). From these known data (those from the synthetic spectra)
we build a model, a relationship that works to a greater or lesser extent, that
can later distinguish the two types of stars measured in the mission, which we
call giants and dwarfs, using only flux measurements.
First, the data from the synthetic spectra library are transformed so that they
are expressed in the same way the Gaia spectrometer will deliver its measurements,
applying transmissivity corrections, unit changes, and so on.
For this we started from a code provided by the researcher Sergey Koposov, in
which those transformations were carried out and on which the rest of the research
and subsequent analysis could build. Once the code was analyzed, the main task was
to understand what each part did and to look for any optimization flaws in it.
In this study, the various properties of the set of synthetic "stars" provided by
the synthetic spectra are analyzed: the metallicity and effective temperature
distributions with respect to flux and the logarithm of surface gravity, the
distribution of fluxes with respect to wavelength by means of percentiles, and
the various properties that highlight differences in magnitude, among others.
The most promising method has been one based on comparing magnitude differences
at different frequencies between the two groups of stars, reaching quite promising
results: a selection of stars with a low percentage of contamination. However, the
results have turned out to be perhaps insufficient on their own, although in
combination with the WEAVE selection method the approach could be quite useful
as a purge in certain cases.
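The giant-dwarf separation by magnitude differences described above can be sketched as a toy colour cut. Everything here is hypothetical (the fluxes, bands, and log g threshold are illustrative stand-ins, not the actual Gaia/WEAVE pipeline or data); it only shows how a colour threshold yields contamination and completeness figures:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the synthetic-spectra library: each "star" gets a
# surface gravity log g and a colour (a magnitude difference between
# two hypothetical bands) that correlates with log g, plus noise.
n = 1000
logg = rng.uniform(0.5, 5.0, n)
is_giant = logg < 3.5                     # giants have low surface gravity
color = 0.3 * logg + rng.normal(0.0, 0.2, n)

# Classify as "giant" every star bluer than the colour matching the
# log g threshold; real magnitudes would come from m = -2.5*log10(flux).
cut = 0.3 * 3.5
selected = color < cut

# Contamination: fraction of selected stars that are actually dwarfs.
contamination = np.mean(~is_giant[selected])
completeness = np.mean(selected[is_giant])
print(f"contamination={contamination:.2f}, completeness={completeness:.2f}")
```

Moving the cut trades contamination against completeness, which is the optimization discussed in the text.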
Apart from this, several paths have been followed which, even if they have not
all led to the desired method, have provided very interesting information on the
characteristics of the star distributions and are worth studying and commenting on.
From here on, the introduction provides scientific context for the project, its
motivation and some relevant data; the objectives section states the purpose of
the work and what it aims to obtain; the methodology section explains how,
starting from a series of synthetic data, we arrange them so that we can
investigate with them as if they were real measurements; the analysis sections
study the different methods followed and briefly discuss the results obtained
with each of them; and finally the conclusions discuss the information obtained
and the outcome of the work, together with several reasons why it would be
interesting to keep investigating the method used.
2020-10-06T10:45:57Z
2020-10-06T10:45:57Z
2020
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/21546
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/62002021-11-05T09:02:16Zcom_915_668com_915_488col_915_678
Introducción a la Teoría Cuántica de Campos: Electrodinámica Cuántica.
Álvarez Reyes, Rafael Juan
Delgado Borges, Vicente
2017-09-26T08:47:09Z
2017-09-26T08:47:09Z
2017
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/6200
es
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 internacional)
oai:riull.ull.es:915/57882021-11-05T09:02:12Zcom_915_668com_915_488col_915_678
Whispering Gallery Modes temperature sensor using a holmium doped glass microsphere.
Sousa Viera, Laura Marina de
Martín Benenzuela, Inocencio Rafael
Ríos Rodríguez, Susana
2017-07-21T13:15:10Z
2017-07-21T13:15:10Z
2017
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/5788
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 internacional)
oai:riull.ull.es:915/257352021-11-05T09:01:52Zcom_915_668com_915_488col_915_678
Estudio de las causas sinópticas de la precipitación en Canarias mediante modelos climáticos. Presente y futuro
Delgado González, Daniel
González Fernández, Albano José
A statistical study of prediction databases and real precipitation data for the
Canary Islands using Principal Component Analysis (PCA), clustering techniques
and meteorological knowledge to predict how rainfall patterns in the Canary
Islands will change in the future.
This is a statistical study of prediction databases and observational rainfall data of
the Canary Islands using Principal Component Analysis (PCA), clustering techniques
and meteorological knowledge to predict the change of rainfall patterns in the Canary
Islands in the future.
First of all, we are going to work with databases representing observations of
precipitation and sea level pressure in and around the Canary Islands, together
with predictions of these same variables. These data cover three periods: one in
the recent past (1980-2009) and two in the future (2030-2059 and 2060-2099). The
data for the future periods cover two scenarios tied to global emissions pathways
(RCP-4.5 and RCP-8.5). The past data correspond to predictions and real
observations, while the future data naturally correspond to climate projections.
Specifically, the observational precipitation data for the recent past are
extracted directly from the SPREAD database (Serrano-Notivoli et al., 2017), a
high-resolution gridded precipitation dataset covering Spain, constructed by
estimating precipitation amounts and their corresponding uncertainty at each node
of a 5x5 km grid. Sea level pressure data around the islands were extracted from
the ERA5 reanalysis (Hersbach et al., 2020).
Apart from this, other databases are used that correspond to regional climate
models predicting sea level pressure and precipitation; that is, they are not
observational data but simulations of these variables. Specifically, three
databases are used, each associated with the global climate model used to
generate it: GFDL, IPSL and MIROC. Both past and future simulations were provided.
These regional climate simulations were performed with the WRF model
(Non-Hydrostatic Weather Research and Forecasting, WRF/ARW v3.4.1) using a
one-way triple-nesting configuration with grid resolutions of 27x27 km, 9x9 km
and 3x3 km. The simulations were carried out by the Group of Earth and
Atmospheric Observation (GOTA) of the University of La Laguna (ULL). The outer
domain is centred on the Northeast Atlantic region and covers a large area in
order to capture the main mesoscale processes affecting the Canary climate, while
the inner domains are centred on the Canary archipelago. The WRF version and the
physical parameterizations used to represent the different subgrid-scale
atmospheric processes were selected by GOTA according to previous work in the
same study area (Pérez et al., 2014; Expósito et al., 2015).
Now that the data used in this study have been explained, the methodology is
outlined. First, some statistical methods are applied to the aforementioned databases
to extract some features and information.
In this study, among other methods, we use Principal Component Analysis (PCA), a
mathematical technique that summarizes the information contained in a data set by
means of other, independent parameters; more specifically, it is a rotation of
the coordinate axes of the original variables to new orthogonal axes aligned with
the directions of maximum variance of the data. In our case, the data to which we
apply this method are the daily rainfall values, and the axes correspond to each
of the land pixels of the Canary Islands in the SPREAD database. In this way we
manage to group the islands' pixels into groups in which rainfall is correlated.
Although this method already gives a grouping of pixels with a certain rainfall
correlation, we then apply a clustering technique on the axes rotated by the PCA
to group the pixels into regions. This should make the regions somewhat more
coherent than the groupings obtained with the PCA alone. Specifically, we use the
K-means method to divide the pixels of the Canary Islands into 6 groups.
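The PCA-then-K-means pipeline can be sketched in a few lines of numpy. This is a toy illustration, not the thesis code: the rainfall matrix is synthetic, two regions are used instead of six, and K-means is hand-rolled for self-containment:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the SPREAD grid: daily rainfall for 60 land pixels
# over 365 days, built from two hypothetical regional rain patterns.
days, n_pix = 365, 60
patterns = rng.gamma(1.0, 2.0, (days, 2))       # two regional signals
mix = np.zeros((n_pix, 2))
mix[:30, 0], mix[30:, 1] = 1.0, 1.0             # each pixel follows one pattern
X = patterns @ mix.T + rng.normal(0, 0.1, (days, n_pix))

# PCA as a rotation to the directions of maximum variance: centre the
# daily series and take the SVD; rows of Vt are the principal axes.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Vt[:2].T                                # each pixel in PC space

# Minimal K-means (k=2 here; the study used k=6 on the real grid).
def kmeans(P, k, iters=50):
    centres = P[rng.choice(len(P), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((P[:, None] - centres) ** 2).sum(-1), axis=1)
        centres = np.array([P[labels == j].mean(axis=0) for j in range(k)])
    return labels

labels = kmeans(scores, 2)
print(np.bincount(labels))   # pixel counts per rainfall region
```

Clustering the PCA scores, rather than the raw daily series, groups pixels whose rainfall is correlated, as described above.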
The weather types for each day are determined from the sea level pressure values
at certain points of a grid located over the Canary Islands, using the formulas
proposed by Jones et al. (2013). Once the weather type (WT) of each day and the
daily precipitation of each pixel group (region) are defined, we can build
heatmaps representing, for each WT and region, the percentage of rain and the
annual mean precipitation or the number of heavy precipitation days.
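The heatmap aggregation itself reduces to grouping daily rainfall by weather type. A minimal sketch with synthetic values (the real study uses the Jones et al. weather types and the six SPREAD-derived regions; everything numeric here is a toy stand-in):

```python
import numpy as np

rng = np.random.default_rng(2)

n_days, n_wt, n_reg = 365, 8, 6              # 6 pixel regions as in the study
wt = rng.integers(0, n_wt, n_days)           # weather type assigned to each day
rain = rng.gamma(0.5, 3.0, (n_days, n_reg))  # daily rainfall per region

# Heatmap cell (w, r): share of region r's total rain that fell on
# days classified as weather type w.
heat = np.zeros((n_wt, n_reg))
for w in range(n_wt):
    heat[w] = rain[wt == w].sum(axis=0)
heat = 100.0 * heat / rain.sum(axis=0)       # percentage of total rainfall

print(heat.shape)   # one row per weather type, one column per region
```

Each column sums to 100%, so comparing the past and future heatmaps column by column shows how the rain contributed by each weather type shifts.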
Once we have the heatmaps for the past and for each future RCP scenario, we
discuss them and extract some features from them. These heatmaps could shed some
light on how rainfall patterns in the Canary Islands may evolve over the coming
decades.
Lastly, we mention some starting points on which future studies of this subject
could be based.
2021-10-22T09:45:41Z
2021-10-22T09:45:41Z
2021
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/25735
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/106112021-11-05T09:02:21Zcom_915_668com_915_488col_915_678
Introduction to intermolecular forces
Wenzel Argüelles, Rubén Thor
Bretón Peña, José Diego
Física
2018-10-10T09:00:05Z
2018-10-10T09:00:05Z
2018
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/10611
en
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 internacional)
oai:riull.ull.es:915/32082021-11-05T09:02:12Zcom_915_668com_915_488col_915_678
Ley de Titius-Bode en sistemas exoplanetarios
Mallorquín Díaz, Manuel
Roca Cortés, Teodoro
Astronomía y Astrofísica
Galaxias
Sistema solar
2016-10-13T13:45:20Z
2016-10-13T13:45:20Z
2016
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/3208
es
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 internacional)
oai:riull.ull.es:915/290962022-11-17T11:31:24Zcom_915_668com_915_488col_915_678
Verificación anual de un conjunto de aplicadores utilizados para radioterapia intraoperatoria en el HUC
Azkonobieta Carballo, Maite
Torres Betancort, Manuel Eulalio
Garrido Bretón, Carlos
Grado En Física
The radiophysics sector is currently growing in Spain, thanks to the investments
that public hospitals are making in equipment for treatments such as
radiotherapy. In particular, the Hospital Universitario de Canarias (HUC) has
acquired the ioRT-50 (Intraoperative radiation therapy - 50 keV) unit,
commissioned in 2019 to perform superficial radiotherapy and intraoperative
radiotherapy techniques. The unit consists of an articulated arm holding a
low-energy X-ray tube, a water tank and a set of applicators, which together
allow radiotherapy treatments to be delivered.
These applicators emit a characteristic radiation distribution, which is measured
with an ionization chamber and radiochromic films. Measurements are taken at a
peak voltage of 70 kV with exposure times of 1 and 2 minutes. The measured
quantity is the dose [Gy], which is related to the depth [mm].
The annual verification of the applicators is carried out and a new measurement
method is designed for their checking, which has been adopted at the University
Hospital. The data obtained will also be included in the Hospital's database.
Currently, the radiophysics sector is growing in Spain, thanks to the investments
that public hospitals are making in equipment for treatments such as
radiotherapy. Specifically, the University Hospital of the Canary Islands (HUC)
has acquired the ioRT-50 (Intraoperative radiation therapy - 50 keV) unit,
commissioned in 2019, to perform superficial and intraoperative radiotherapy
techniques. The equipment consists of an articulated arm that holds a low-energy
X-ray tube, a water tank and a set of applicators, which together allow
radiotherapy treatments to be delivered.
These applicators emit a characteristic radiation distribution. To measure it, an
ionization chamber and radiochromic films are used, working at a peak voltage of
70 kV with exposure times of 1 and 2 minutes. The measured parameter is the dose
[Gy], which is related to the depth [mm].
The annual verification of the applicators is carried out and a new measurement
method has been designed, which has been incorporated into the University
Hospital. In addition, the data obtained will be included in the Hospital's
database.
2022-07-19T10:30:19Z
2022-07-19T10:30:19Z
2022
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/29096
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/206642021-11-05T09:01:56Zcom_915_668com_915_488col_915_678
An educated review of "Quantum work statistics, Loschmidt echo and information scrambling".
González Padrón, Eduardo
Alonso Ramírez, Daniel
The formulation of quantum work statistics as a dynamical problem through the
Loschmidt echo is at the heart of this work. An introduction to each of these
concepts is presented together with the notion of information scrambling, which
extends the scope of this work to areas such as quantum chaos or even black hole
physics. Using the paper of A. Chenu et al. [1] as a guide, we first show that
the work statistics associated with an arbitrary driving protocol of an isolated
quantum system in a generic initial state are equivalent to the Loschmidt echo
dynamics of a purified density matrix in an enlarged Hilbert space. When the
initial state is thermal, the purification leads to a thermofield double state,
which is used to describe eternal black holes through the AdS/CFT correspondence,
often argued to be the fastest information scramblers. The field of quantum
chaotic systems is shown to emerge naturally from the previous content, and a
full description of it in terms of Random Matrix Theory is also presented.
Numerical and analytical results are finally obtained for the quantities
introduced after imposing time-reversal symmetry on our problem, hence selecting
the Gaussian Orthogonal Ensemble as the framework within which we take our
averages.
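The two central objects of the abstract, a GOE Hamiltonian and a Loschmidt echo, can be illustrated numerically. This is a generic sketch, not the thesis code: the matrix size, perturbation strength and initial state are arbitrary choices, and the GOE is sampled up to an overall scale:

```python
import numpy as np

rng = np.random.default_rng(3)

def goe(n):
    """Sample an n x n real symmetric matrix (GOE up to overall scale)."""
    a = rng.normal(size=(n, n))
    return (a + a.T) / 2.0

n = 64
H0 = goe(n)
H1 = H0 + 0.1 * goe(n)        # weakly perturbed Hamiltonian

# Loschmidt echo L(t) = |<psi| e^{+i H0 t} e^{-i H1 t} |psi>|^2 for a
# random initial state, via spectral decomposition of each Hamiltonian.
e0, v0 = np.linalg.eigh(H0)
e1, v1 = np.linalg.eigh(H1)
psi = rng.normal(size=n)
psi /= np.linalg.norm(psi)

def echo(t):
    forward = v1 @ (np.exp(-1j * e1 * t) * (v1.T @ psi))  # e^{-i H1 t} psi
    back = v0 @ (np.exp(-1j * e0 * t) * (v0.T @ psi))     # e^{-i H0 t} psi
    return abs(np.vdot(back, forward)) ** 2

ts = np.linspace(0.0, 5.0, 50)
L = np.array([echo(t) for t in ts])
print(f"L(0)={L[0]:.3f}, min L={L.min():.3f}")
```

L(0) = 1 by construction, and the subsequent decay of L(t) is the quantity whose ensemble average over GOE draws the abstract refers to.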
2020-07-28T09:26:15Z
2020-07-28T09:26:15Z
2020
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/20664
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/291102022-11-22T10:59:01Zcom_915_668com_915_488col_915_678
3D FS-Laser nanostructuring of YAG crystals: Laboratory experiments and a numerical design of optical waveguides
Díaz García, Inés Meili
Ródenas Seguí, Airán
Grado En Física
In this final degree project, a study of the fabrication and application of 3D
photonic structures at the nanometric and micrometric scales was carried out.
Specifically, the study was based on the concept of photonic crystals and on
microstructured optical waveguides as the technological goal. The project begins
by defining the theoretical framework on which these structures of interest and
their interactions with external factors, such as light, are based. To that end,
theoretical concepts of the periodic structures that define these photonic
objects were used, and a review of definitions and equations from both
Electromagnetism and Optics was carried out. In addition, the experimental
technique developed for fabricating a periodic pattern of hollow nanopores in the
material by femtosecond-pulse laser irradiation was studied in detail. The
material used throughout this work was YAG ("yttrium aluminium garnet") crystal.
Achieving this required several technological devices: a femtosecond laser, a
pulse-picker, a computer-controlled 3D nanopositioning system, an optical
microscope, an electron microscope, an optical polishing machine, wet chemical
etching infrastructure, and numerical simulation tools (the commercial BandSOLVE
software).
Once the nanolithography technique for obtaining nanopores had been studied,
experiments were carried out with the help of the PhD student Franzette Paz
Buclatin. The objective was to study the characteristics of this lithography
technique, based on a technique discovered by the supervisor, Dr. Ródenas. An
analysis was therefore performed to study how the pore length and the
sub-micrometric cross-sectional shape and size depend on different fabrication
parameters such as the laser repetition rate, the pulse energy and the laser
writing speed. To reach this goal, the image processing program ImageJ was used.
In this last case it was necessary to resort to additional services such as SEGAI
in order to use the scanning electron microscope.
In addition, the BandSOLVE RSOFT software was also used during this project.
Applying this software made it possible to put the theoretical knowledge of
periodic structures into practice in a simple numerical simulation framework. In
this way, a hexagonal lattice could be optimized in order to obtain the most
suitable characteristics for the design and study of a photonic waveguide, from
the UV to the IR range of the electromagnetic spectrum. The electromagnetic range
of interest was limited to the transparency range of YAG, from approximately
250 nm (UV) to 5000 nm (mid-IR), always taking into account the dispersion of the
crystal's refractive index.
In this final degree project, a study of the fabrication and implementation of 3D
photonic structures at nanometric and micrometric scales was carried out. In
particular, the study was based around the concept of photonic crystals and
microstructured optical waveguides as the technological goal. It starts by
defining the theoretical framework on which these structures of interest are
based and their interactions with external factors, such as light. To do this, it
draws on theoretical concepts of the periodic structures that define these
photonic objects, and a review of the relevant Electromagnetism and Optics
definitions and equations has been done. Furthermore, the experimental technique
for fabricating a periodic pattern of hollow nanopores in the material by means
of femtosecond-pulse laser irradiation has been studied. The material used during
this project was YAG crystal (yttrium aluminium garnet). Achieving this required
different technological devices such as a femtosecond-pulse laser, a
pulse-picker, a computer-controlled 3D nanopositioning system, an optical
microscope, an electron microscope, an optical polishing machine, wet-chemical
etching infrastructure, and also numerical simulation tools (the commercial
BandSOLVE software).
Once the nanolithography technique used for obtaining nanopores was studied, the
experiments were carried out with the assistance of the PhD student Franzette Paz
Buclatin. The goal was to study the characteristics of this lithography
technique, which is based on a technique discovered by the supervisor, Dr.
Ródenas. Therefore, an analysis was developed to study how the pore length, as
well as its sub-micron cross-sectional shape and size, depend on fabrication
parameters such as the pulse repetition rate of the laser, the pulse energy, or
the laser writing speed. To achieve this objective, the image processing software
ImageJ was used. For this last task, additional support from the SEGAI facilities
was needed in order to use the scanning electron microscope (SEM).
Also, throughout this project, the BandSOLVE RSOFT software was used. This
software allowed the theoretical knowledge about periodic structures to be put
into practice in an easy-to-use numerical simulation framework. In this way, it
was possible to optimize a hexagonal lattice in order to obtain the most suitable
characteristics for designing and studying a photonic waveguide, from the UV to
the IR range of the electromagnetic spectrum. The EM range of interest was
limited to the transparency range of YAG, from around 250 nm (UV) to 5000 nm
(mid-IR), for which the dispersion of the refractive index of the crystal was
always taken into account.
2022-07-19T10:32:04Z
2022-07-19T10:32:04Z
2022
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/29110
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/290722022-11-17T10:45:36Zcom_915_668com_915_488col_915_678
Halo mass measurements with the kinetic Sunyaev-Zel'dovich effect.
Isla Llave, Mónica Natalia
Hernández-Monteagudo, Carlos
Plan Erasmus / Sicue
In the past few years, several collaborations studying the Cosmic Microwave
Background have used the kinetic Sunyaev-Zel'dovich (kSZ) effect to measure the
gas mass and total mass of galaxy clusters (Calafut et al., 2021; Vavagiakis et
al., 2021). These works assume that the kSZ signal associated with a galaxy
cluster is entirely caused by the ionised gas inside its virial radius,
dismissing the kSZ effect caused by unbound electrons that lie near and along the
same line of sight as the cluster. This would introduce a bias impacting the mass
estimates made from kSZ measurements. This project aims to quantify the
free-electron contribution to the total kSZ signal along a line of sight towards
a galaxy cluster/group characterised by its mass and redshift. Two methods have
been employed: a semi-analytical method, which applies linear theory and uses
theoretical models from Chaves-Montero et al. (2021), Tinker et al. (2010), and
Vogelsberger et al. (2020); and a numerical method, using data from an N-body
simulation at z = 0 provided by Prof. Dr. Raúl Angulo. The results obtained from
both are qualitatively compatible, with the relative free-electron contribution
being greater (30-40%) for lower mass halos (Mhalo ≲ 10^13 M⊙/h) and decreasing
with mass (5-10% for Mhalo ≳ 10^15 M⊙/h). The difference between the results
obtained with the semi-analytical method and the simulation data, seen primarily
in the growth curve of the kSZ halo contribution as a function of halo mass, may
have been caused by non-linear effects neglected in the linear theory approach
this project has followed, although current efforts are investigating the cause
of this mismatch more deeply.
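The numerical method, selecting simulation particles inside a virial-radius sphere versus a line-of-sight cylinder and comparing the resulting kSZ fluxes, can be sketched with a toy particle catalogue. All numbers here (halo size, particle counts, velocities) are illustrative stand-ins, not the actual simulation data:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy particle catalogue: a halo of radius r_vir at the origin plus a
# uniform background along the line of sight (taken to be the z axis).
r_vir, box = 1.0, 50.0
halo = rng.normal(0.0, 0.3 * r_vir, (2000, 3))
field = rng.uniform(-box / 2, box / 2, (20000, 3))
pos = np.vstack([halo, field])
v_los = np.concatenate([np.full(2000, 300.0) + rng.normal(0, 50, 2000),
                        rng.normal(0, 100, 20000)])   # km/s, toy values

# The kSZ flux is proportional to the sum of line-of-sight velocities
# of the free electrons, here mimicked by the particles.
in_sphere = np.linalg.norm(pos, axis=1) < r_vir
in_cyl = (np.hypot(pos[:, 0], pos[:, 1]) < r_vir) & (np.abs(pos[:, 2]) < box / 2)

flux_halo = v_los[in_sphere].sum()
flux_total = v_los[in_cyl].sum()
outside_fraction = 1.0 - flux_halo / flux_total
print(f"fraction of the cylinder kSZ flux from outside the halo: "
      f"{outside_fraction:.2f}")
```

Varying the cylinder depth and the halo mass, and averaging over many halos, gives the mass dependence of the free-electron contribution discussed above.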
The study of the large-scale structure of the universe is one of the fundamental
branches of research in physical cosmology today. Within it falls the
characterization of galaxy clusters, the largest known virialized structures, as
well as models of their formation and evolution throughout the different epochs
of the universe. Currently, galaxy clusters (and all overdense regions of the
universe) are thought to originate from the primordial density fluctuations of
the universe, which grew through the gravitational instability they cause and
which are associated with the quantum fluctuations that grew to macroscopic size
during the inflationary period. Therefore, knowing the statistics of the galaxy
cluster population is a way to constrain the parameters of the concordance
cosmological model.
Part of the research on the anisotropies of the cosmic microwave background has
opened a new window for studying galaxy clusters through the interactions between
the background radiation and baryonic matter. Among the effects caused by these
interactions, this project focuses on the kinetic Sunyaev-Zel'dovich (kSZ) effect
(Sunyaev and Zeldovich, 1972), the Doppler distortion of the background radiation
caused by Thomson scattering between the microwave background photons and ionized
matter moving with a peculiar velocity with respect to it. This effect has
recently been used by several collaborations (e.g. the ACTPol collaboration;
Vavagiakis et al., 2021; Calafut et al., 2021) to infer the gas and total masses
of galaxy clusters. In these works it is assumed that the kSZ flux along the line
of sight of a cluster comes solely from the intracluster medium. However, a
contribution to the kSZ signal from ionized gas moving with a peculiar velocity
outside the halo is also expected. The objective of this work is to analyze this
contribution from the ionized medium outside the virial radius of the halos,
comparing it with the contribution from their interior for halos of different
masses.
Two methods have been followed to analyze the kSZ fluxes coming from the halos,
i.e. from spheres of virial radius, and from cylinders with an aperture equal to
the virial radius and variable depth: a semi-analytical method, which uses
theoretical models to characterize the overdensity and peculiar velocity fields
of galaxy clusters, and another based on a catalogue of halos and dark matter
particles at redshift z = 0 obtained from an N-body simulation, provided by Prof.
Dr. Raúl Angulo. For the semi-analytical method, a theoretical development was
carried out within the framework of linear perturbation theory to model the
peculiar velocity field, and a gas overdensity model was used that combined the
contribution of a halo, obtained by Chaves-Montero et al. (2021), and the
contribution of clusters near the line of sight that contribute to the kSZ of the
observed cluster, also called the two-halo contribution, obtained through the
halo mass function extracted from data of Ondaro-Mallea et al. (2022) and Tinker
et al. (2010). The procedure followed with the simulations consisted of writing a
code that selected the dark matter particles inside the volumes of interest
(spheres of virial radius and cylinders of variable depth centred on the halos)
and comparing the fluxes from each halo of a population with masses between
10^12 and 2 x 10^15 M⊙/h.
The results obtained by both methods show that for low-mass clusters
(10^12 - 10^13 M⊙/h) the free-electron contribution to the kSZ flux along the
line of sight is 45-30% for the maximum line-of-sight depth used of 512 Mpc/h. On
the other hand, for clusters of higher virial mass (≳ 10^15 M⊙/h), this
free-electron contribution decreases to 10-5%. The results of the two methods
differ quantitatively in how the ratio of kSZ coming from the halos grows with
halo mass, but this could be because the N-body simulation data used do not take
into account the effects of baryonic physics, whereas the overdensity profiles of
Chaves-Montero et al. (2021) do. It may also be due to non-linear physical
processes affecting the density and velocity of the gas that are not faithfully
captured by our simplified first-order linear perturbation treatment. It is
concluded that for halo masses ≲ 10^13 M⊙ the results indicate that the
contribution of electrons outside the halo amounts to 30-40% of the total kSZ
flux along the line of sight, whereas for masses ≳ 10^15 M⊙ the free-electron
contribution drops to ~10%. We plan to extend these results in the near future
using simulations that include baryonic matter.
2022-07-19T09:55:51Z
2022-07-19T09:55:51Z
2022
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/29072
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/250152021-11-05T09:02:06Zcom_915_668com_915_488col_915_678
Finding low surface brightness galaxies near NGC1042 using GNU astronomy utilities
García-Serra Romero, Andrés
Trujillo Cabrera, Ignacio
Galaxies
Satellites
Low Surface Brightness
This document describes the detection of Low Surface Brightness Galaxy (LSBG) satellites around NGC1042 using data obtained by the LBT Imaging of Galaxy Halos and Tidal Structures (LIGHTS) survey. This survey was recently proposed by a team of IAC researchers in collaboration with other institutes, with the objective of studying the low surface brightness universe for a better understanding of the behavior of galaxy stellar halos and of the commonly known "missing satellites problem". This last topic is the principal subject of this work. In collaboration with some members of the LIGHTS team, the project consisted of the development of an algorithm capable of detecting these very low surface brightness objects. The document starts by introducing the project and the data set obtained; then the process of detection and categorization of the galaxies is explained, which leads to a final sample of these objects. Throughout the document, the difficulties and challenges behind the observation and detection of these very faint structures are discussed. For this analysis, some objects detected previously in the literature have been used as a reference.
This document presents some candidate Low Surface Brightness Galaxies (LSBG) around NGC1042 using data from the LBT Imaging of Galaxy Halos and Tidal Structures (LIGHTS) survey. This survey has recently been proposed by IAC researchers in collaboration with other institutions, with the main objective of studying the low surface brightness universe in order to understand in greater depth the behavior of galaxy stellar halos and the "missing satellites problem"; the latter being the main topic of this work. In collaboration with some members of the LIGHTS team, the work has consisted of developing an algorithm capable of detecting these very low surface brightness objects. The document begins by introducing the project and the data obtained; then the process of detection and categorization of these galaxies is developed, which finally leads to a sample of them. Along the way, the difficulties and challenges behind the observation and detection of these objects are discussed, using previous detections from the literature as a reference throughout.
2021-07-29T11:31:03Z
2021-07-29T11:31:03Z
2021
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/25015
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/291212022-11-15T13:55:12Zcom_915_668com_915_488col_915_678
A primer on the study of one dimensional systems, Bethe ansatz and integrability.
Pérez Cruz, Daniel
Valiente Cifuentes, Manuel
Grado En Física
The aim of this work is to introduce and familiarize the reader with the techniques and foundations of the study of one-dimensional quantum many-body systems. The analysis of systems of this type began shortly after Schrödinger's wave formulation of quantum mechanics (1926), and one of the pioneers in this area was Hans Bethe (1931). In his study of quantum magnetism he introduced his famous ansatz, which constituted the first complete solution to an interacting many-body problem. His contribution would go unnoticed until, in 1963, Lieb and Liniger used the ideas developed by Bethe to solve the problem of N bosons in one dimension interacting through a Dirac-delta potential. This opened a new field of study both in the physics of strongly interacting quantum systems and in the study of quantum gases. The recent experimental realization of systems of this kind has prompted increased technical efforts to obtain more varied systems, as well as intense theoretical advances to provide a more detailed description of their dynamics. The importance of working with one-dimensional systems lies not only in their greater likelihood of admitting an analytical solution, but also in the new phenomena that can be observed in one dimension, for example the process of fermionization.
The first part of the work is devoted to introducing the reader to collision theory, describing the basic elements needed and placing special emphasis on the analysis of one-dimensional problems.
In the next part of the work we introduce the concept of integrability. It arises in the study of classical Hamiltonian systems and has been an area in which progress has mostly been made from a mathematical perspective. Bethe's method and its generalizations provide us with techniques to determine whether a quantum system is integrable or not. The study of integrability in quantum systems has become an area of intense research because of the deep implications it has for quantum statistical physics. We study, qualitatively, the relation between the Bethe ansatz, the integrability of the system and the thermalization process, analysing the mechanisms that allow, or prevent, these processes.
The second step, once the Bethe ansatz and its properties have been introduced, is to use it to solve the original Lieb-Liniger model, both for a system of N bosons and for the ground state in the thermodynamic limit. Numerical methods are used to solve the equations obtained in both cases, and the results are discussed. Here we obtain, for the first time, a hint of the relation between the spectrum of integrable systems and the eigenvalue distribution of random matrices.
The next objective of the work is to carry out an analysis similar to the previous one, but for a system in which three bosons interact through a Gaussian potential rather than a Dirac delta. Theoretical studies have found that the system remains integrable for certain values of the system parameters. In this work we obtain numerical results that support this, as well as hints of the breakdown of integrability at medium-to-high values of the density. These conclusions are drawn from the study of the spectrum of this modified model.
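The Lieb-Liniger solution mentioned above can be illustrated in its simplest non-trivial case. The sketch below is not part of the thesis; the function name and the bisection tolerance are illustrative choices. It solves the logarithmic Bethe equation for the N = 2 ground state on a ring of length L, kL = π − 2 arctan(2k/c), for the quasi-momenta ±k, and recovers the free-fermion limit k → π/L as c → ∞ (fermionization):

```python
import math

def ground_state_k(c, L, tol=1e-12):
    """Solve k*L = pi - 2*atan(2k/c) for the N = 2 Lieb-Liniger ground state
    (quasi-momenta +k and -k on a ring of length L, repulsive coupling c > 0)."""
    f = lambda k: k * L - math.pi + 2.0 * math.atan(2.0 * k / c)
    lo, hi = 0.0, math.pi / L          # f(lo) < 0 < f(hi): the root is bracketed
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

k_weak = ground_state_k(c=0.1, L=1.0)   # weak coupling: k ~ sqrt(c/L), small
k_tg   = ground_state_k(c=1e6, L=1.0)   # c -> infinity: k -> pi/L (fermionization)
print(k_weak, k_tg)
```

For general N the same logarithmic form gives N coupled equations, one per quasi-momentum, which are solved with the same kind of root-finding iteration.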
In this work we introduce important concepts, techniques and procedures related to the study of one-dimensional quantum many-body systems. The study of systems of this kind started shortly after Schrödinger's seminal paper (1926) with the work of Hans Bethe (1931). He introduced a novel technique, the Bethe ansatz, that would remain largely unrecognized
until the paradigmatic work of Lieb and Liniger (1963). They were able to solve, without any
approximation, a many body quantum system analytically, both for a finite system of bosons
as well as in the thermodynamic limit. This opened a new research field that has helped to
understand the physics of strongly correlated quantum systems as well as the dynamics of
dilute gases. Moreover, the recent experimental realization of systems of this kind has fueled
not just intense experimental efforts to reproduce more diverse one dimensional systems but
also theoretical ones. The importance of dealing with one-dimensional systems is that they exhibit exotic phenomena that are not present in 2D and 3D systems, as we shall see with the process of fermionization. Moreover, one-dimensional systems are more likely to admit an analytical solution, making it easier to understand their dynamics.
Another topic that we will be covering is integrability. Coming from the theory of classical Hamiltonian systems, it has long been an elusive topic for physicists, and most advances have been made with more mathematically oriented aims. Bethe's method and its generalizations (nested, thermodynamic, algebraic and coordinate Bethe ansatz) allow us to determine whether a system is integrable or not, because solvability by the ansatz is a clear signature of integrability. The study of integrability in quantum systems has become a topic of great interest because of the deep implications it has in quantum statistical mechanics. The relation between integrability and thermalization is still under study, and the mechanisms that enable one or the other remain to be discovered, although some hypotheses, such as the eigenstate thermalization hypothesis, have been proposed.
The main motivation of this work is to serve as an introduction to this vast topic. We
begin with a brief review of scattering theory, introducing some useful results and how these
can be used to solve one dimensional problems. The next step is an overview of the problem
of thermalization: when should one expect a system to thermalize? Which mechanisms
are responsible? The link between integrability, thermalization and quantum collisions is
established here, qualitatively.
Our following task is to study Bethe's method, initially giving a general description and explaining why its applicability implies that the system is integrable. Then, we solve the problem
of a system of N bosons in a ring with delta interaction [1], both for finite N and in the
thermodynamic limit. We study the spectrum of this model and find some relations between the distribution of Bethe's rapidities and the spectra of random matrices, which allow us to introduce the concepts of level repulsion and level-spacing distribution. After some
calculations we proceed to study a slightly modified model for a system of three bosons,
where we substitute the delta potential by a Gaussian. We repeat the same analysis as for
the Lieb-Liniger model and analyze the similarities and differences between them. Theoretical
research on this topic has found that the system is integrable in certain regimes. We study the statistics of the energy spectrum for different values of the system parameters, comparing the results with those expected in the integrable and non-integrable cases.
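The level-spacing statistics referred to above are often summarized by the mean ratio of consecutive spacings, which has the practical advantage of not requiring the spectrum to be unfolded. A minimal sketch (not from the thesis; the sample size and seed are arbitrary) checks that an uncorrelated spectrum reproduces the Poisson value ⟨r⟩ = 2 ln 2 − 1 ≈ 0.386, to be contrasted with ≈ 0.536 expected for GOE random matrices:

```python
import random

def mean_spacing_ratio(levels):
    """Mean ratio r_n = min(s_n, s_{n+1}) / max(s_n, s_{n+1}) of consecutive
    level spacings s_n; ~0.386 for Poisson statistics, ~0.536 for GOE."""
    E = sorted(levels)
    s = [b - a for a, b in zip(E, E[1:])]            # nearest-neighbour spacings
    r = [min(x, y) / max(x, y) for x, y in zip(s, s[1:])]
    return sum(r) / len(r)

random.seed(0)
poisson_levels = [random.random() for _ in range(50000)]  # uncorrelated spectrum
print(mean_spacing_ratio(poisson_levels))  # close to 2*ln(2) - 1 ≈ 0.386
```

Applying the same statistic to the numerically obtained spectrum of a model is a standard diagnostic of whether it behaves as integrable (Poisson-like) or not (random-matrix-like).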
2022-07-19T11:00:50Z
2022-07-19T11:00:50Z
2022
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/29121
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/4302021-11-05T09:02:08Zcom_915_668com_915_488col_915_678
Síntesis y estudio estructural y espectroscópico de nanomateriales conversores de fotones
Puentes de la Muñoza, Julio
Yanes Hernández, Ángel Carlos
Física
Espectroscopía de sólidos
2014-10-09T12:50:10Z
2014-10-09T12:50:10Z
2014
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/430
es
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 internacional)
oai:riull.ull.es:915/106252021-11-05T09:02:26Zcom_915_668com_915_488col_915_678
Revealing dusty star-forming galaxies in a galaxy cluster in formation in the early Universe
Delgado Fumero, Armando
Roca Cortés, Teodoro
Física
2018-10-10T09:50:19Z
2018-10-10T09:50:19Z
2018
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/10625
en
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 internacional)
oai:riull.ull.es:915/86912021-11-05T09:02:17Zcom_915_668com_915_488col_915_678
Introducción a la teoría de cuerdas
Franchy Curbelo, Carlos
Física
2018-06-20T11:55:15Z
2018-06-20T11:55:15Z
2018
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/8691
en
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 internacional)
oai:riull.ull.es:915/57892021-11-05T09:02:13Zcom_915_668com_915_488col_915_678
Bose-Einstein Condensates for Dilute Alkali Gases
Perdomo García, Andrea
Delgado Borges, Vicente
2017-07-21T13:15:15Z
2017-07-21T13:15:15Z
2017
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/5789
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 internacional)
oai:riull.ull.es:915/257312021-11-05T09:01:50Zcom_915_668com_915_488col_915_678
A chemical abundance analysis of P-rich star candidates
Ciardella, Samuele
Masseron, Thomas Pierre
García-Hernández, Domingo Aníbal
Fósforo
Espectro estelar
Abundancia
This report analyses the work done on the spectra of 39 stars to determine whether they can be included in the group of P-rich stars, stars whose abundance of phosphorus is higher than average.
These stars are very important for explaining the origin of the phosphorus found in our galaxy, which is currently underpredicted by chemical evolution models. Last year an article was published that, analysing data from the APOGEE survey, presented the discovery of the existence of stars with a higher-than-normal abundance of P, called P-rich [1]. In a second work, 39 stars with anomalies in their composition were presented [2]. The objective of this project was to determine whether these 39 stars were P-rich and whether they showed a correlation between P and other elements (O, Mg, Al, Si and Ce). To do this we used the data for the stars studied provided by the APOGEE database (H-band spectra, effective temperature, surface gravity and abundances of carbon, nitrogen and alpha elements, among others) as input for the spectral synthesis code BACCHUS, which provided the phosphorus and cerium abundances of these stars. We found that 37 stars (∼90%) are very rich in P ([P/Fe]>=1 dex) and rich in Ce ([Ce/Fe]>0 dex). We then compared the P and Ce abundances with those of other elements, as well as with those observed in other stars with similar atmospheric parameters. We found that there is indeed a possible correlation between phosphorus and oxygen, as well as between O and Si and between Al and Mg.
This report analyses the work carried out on the spectra of 39 stars to determine whether they can be included in the group of P-rich stars, stars whose phosphorus abundance is above average.
These stars are very important for explaining the origin of the phosphorus found in our galaxy, currently underpredicted by chemical evolution models. Last year an article was published that, analysing data from the APOGEE survey, presented the discovery of the existence of stars with a higher-than-normal abundance of P, called P-rich [1]. In a second work, 39 stars with anomalies in their composition were shown [2]. The objective of this project was to determine whether these 39 stars were P-rich and whether they showed a correlation between P and other elements (O, Mg, Al, Si and Ce). For this, the data for the stars studied provided by the APOGEE database (H-band spectra, effective temperature, surface gravity and abundances of carbon, nitrogen and alpha elements, among others) were used as input for the spectral synthesis code BACCHUS, which provided the phosphorus and cerium abundances of these stars. We found that 37 stars (∼90%) are very rich in P ([P/Fe]>=1 dex) and rich in Ce ([Ce/Fe]>0 dex). We then compared the P and Ce abundances with those of other elements, as well as with those observed in other stars with similar atmospheric parameters. We found that there is indeed a possible correlation between phosphorus and oxygen, as well as between O and Si and between Al and Mg.
2021-10-22T09:20:44Z
2021-10-22T09:20:44Z
2021
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/25731
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/291062022-11-17T12:48:17Zcom_915_668com_915_488col_915_678
Análisis comparativo del concepto de la luz en los currículos de secundaria español y estadounidense
Pérez Fiel, Fernando
Eff-Darwich Peña, Antonio Manuel
Alonso Ramírez, Daniel
Grado En Física
More and more students want to complete their academic training abroad, but does a student in Spain acquire the same knowledge in physics as one in the United States over the course of secondary education? To what extent can changing educational systems alter one's conceptual level in the sciences? This document attempts to answer these questions. To do so, once the academic stage has been fixed (the second cycle of secondary education in Spain, High School in the United States), we compare the curricula on Light taught in the schools of each country. This comparison focuses on the academic objectives concerning Light that appear in the regulations governing each country, the Organic Law of Education issued by the Ministry of Education and the Next Generation Science Standards (NGSS), which describe the knowledge students should have attained by the end of this educational stage.
Does a student acquire the same knowledge in physics, and specifically on the subject of Light, in Spain and in the United States throughout high school? Is it possible for a Spanish student to travel to the US to complete their academic stage, or vice versa, without altering their educational level in the field of science? This document attempts to provide answers to these questions. To do so, once the academic stage has been established (the second cycle of secondary education in Spain, High School in the United States), we compare the curricula on Light taught in the schools of each country. This comparison focuses on the academic objectives concerning Light that appear in the regulations governing each country, the Organic Law of Education implemented by the Ministry of Education and the Next Generation Science Standards (NGSS), which describe the knowledge students should have attained by the end of the educational stage.
2022-07-19T10:31:40Z
2022-07-19T10:31:40Z
2022
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/29106
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/61902021-11-05T09:02:17Zcom_915_668com_915_488col_915_678
Estudio de la distribución espectral de energía del remanente de supernova IC443 con datos de Quijote
Carro Portos, Pablo
Génova Santos, Ricardo T.
2017-09-26T08:46:24Z
2017-09-26T08:46:24Z
2017
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/6190
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 internacional)
oai:riull.ull.es:915/291092022-11-22T10:51:37Zcom_915_668com_915_488col_915_678
Estudio mecano-cuántico del Nitruro de Galio desde primeros principios
Lorenzo Domínguez, María
Muñoz González, Alfonso
Grado En Física
For the last few decades, quantum-mechanical studies of materials have contributed to great advances in materials science and its applications, making it possible to obtain results in a less complex way than experimentally. In this report, Gallium Nitride has been studied in order to understand its behavior under high pressure.
To perform the ab initio simulation of Gallium Nitride, the theoretical background on which it is based must be explained. Since the problem lies within the quantum-mechanical framework, the total-energy Hamiltonian must be solved, but to do so it is necessary to make some approximations. The first one is the Born-Oppenheimer or adiabatic approximation, in which the problem is reduced to a system of electrons in a frozen configuration of the nuclei [1]. After that, density functional theory is considered, which establishes that the system can be treated as non-interacting electrons in an external potential and that the energy of the system is a functional of the electron density. This theory was proposed by Hohenberg & Kohn, who also described the energy functional as a sum of two contributions: one related to the external potential and the other related to a universal functional. The form of the latter was given by Kohn & Sham, who established that the functional is composed of the kinetic energy of the electrons and the exchange-correlation energy. They also proposed a set of equations, solved by self-consistent methods, that can be used to obtain the electron density and the external potential that solve the energy equation.
Once the total energy of the system is obtained, other variables and parameters can be derived from it, such as the pressure, the enthalpy, structural parameters, etc. The energies and volumes obtained from the ab initio simulation can be fitted to the Birch-Murnaghan equation of state, from which the equilibrium constants can be obtained.
In this work, the ab initio simulation was performed to study the evolution of GaN at high pressures, where a phase transition can be observed. This transition takes place from the wurtzite phase to rock salt at 45 GPa, according to the calculations of this work carried out with VASP (Vienna Ab initio Simulation Package). The wurtzite phase has two structural parameters, c/a and u. At equilibrium, the obtained values were a0 = 3.179 Å and c0 = 5.179 Å, which are in accordance with the bibliography. The rock-salt structure has only one parameter, a0 = 4.217 Å. The other equilibrium constants are collected in tables 2 and 3.
It is also possible to calculate the electronic band structures of both phases, in which a band gap of 1.63 eV was obtained for wurtzite (at p = 0) and one of 1.21 eV for rock salt at the transition pressure. The values are smaller than the experimental ones, as the use of GGA underestimates the energy gap.
All these calculations were done using an energy cutoff of 600 eV and Rk = 24 for the construction of the k-point grid.
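The Birch-Murnaghan fit mentioned above can be sketched as follows. This is a minimal illustration, not the thesis's actual fit: the parameter values are hypothetical, merely of the order expected for wurtzite GaN, and only the third-order energy-volume relation is evaluated to show that E(V) has its minimum at the equilibrium volume V0:

```python
def birch_murnaghan_energy(V, E0, V0, B0, B0p):
    """Third-order Birch-Murnaghan equation of state E(V), with equilibrium
    energy E0, equilibrium volume V0, bulk modulus B0 and its pressure
    derivative B0p. Consistent units assumed (e.g. eV and Å^3)."""
    eta2 = (V0 / V) ** (2.0 / 3.0)
    return E0 + 9.0 * V0 * B0 / 16.0 * (
        (eta2 - 1.0) ** 3 * B0p + (eta2 - 1.0) ** 2 * (6.0 - 4.0 * eta2)
    )

# Illustrative (not fitted) parameters, roughly of the order of wurtzite GaN
E0, V0, B0, B0p = -12.0, 23.0, 1.2, 4.3   # eV, Å^3, eV/Å^3, dimensionless
energies = {V: birch_murnaghan_energy(V, E0, V0, B0, B0p)
            for V in [20.0, 21.5, 23.0, 24.5, 26.0]}
print(energies)   # E(V0) = E0 and E(V) > E0 away from V0
```

In practice the four parameters are obtained by least-squares fitting this expression to the (V, E) pairs produced by the ab initio calculations.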
2022-07-19T10:31:56Z
2022-07-19T10:31:56Z
2022
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/29109
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/71332021-11-05T09:02:20Zcom_915_668com_915_488col_915_678
Estudio de la estructura termodinámica de la baja troposfera en Canarias bajo la influencia de las invasiones de aire sahariano
Oramas Rodríguez, José Carlos
Guerra García, Juan Carlos
Física
2018-03-19T14:00:16Z
2018-03-19T14:00:16Z
2018
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/7133
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial- Sin obras derivadas 4.0 internacional)
oai:riull.ull.es:915/250102021-11-05T09:02:05Zcom_915_668com_915_488col_915_678
A pilot study to test the reliability of OII model atoms with stellar spectroscopy
Avellaneda González, José Andrés
Simón-díaz, Sergio
Osorio, Yeisson
Quantitative spectroscopy can be defined as the discipline that allows physical parameters to be inferred by applying spectroscopic analysis tools to an observed spectrum. A large number of high-quality spectra are currently available, allowing a detailed study of the physical properties of stars in different spectral windows. For the study of stars, the observed spectrum is not the only necessary tool: quantitative analysis requires a theoretical framework against which to compare the observations and infer physical parameters. In modern astrophysics this theoretical framework is known as radiative transfer, and it is put into practice through the so-called model atmospheres.
Model atmospheres make it possible to solve the radiative transfer problem in the star in order to create a synthetic stellar spectrum; that is, to deduce the shape of the spectrum that emanates from the star and is measured by our telescopes. Besides considerations regarding certain macroscopic aspects of the star (concerning both the geometry assumed in the modelling and the fundamental physical parameters that characterize it), model atmospheres need reliable model atoms to represent the radiation-matter interaction taking place in the star. The objective of this work is to test two O ii model atoms built with the computational package maKe Atoms Simple (KAS) developed by Yeisson Osorio. The comparison is carried out through the calculation of the oxygen abundance in the star BD+463474 using the curve-of-growth method. The observations (whose spectroscopic analysis was already presented in García-Rojas et al., 2014) were made with the high-resolution spectrograph FIES mounted on the NOT telescope at the Roque de los Muchachos observatory on 10 September 2012. This star, located in the Cocoon nebula, is a good candidate for testing model atoms because of its low rotation and favourable stellar parameters.
In order to compare the model atoms, two grids of synthetic spectra were calculated (in which, with the effective temperature and surface gravity of the star fixed, the oxygen abundance and the microturbulence were allowed to vary). The spectra were computed with the stellar atmosphere code TLUSTY, which was fed with each model atom to calculate the corresponding grid. In this regard, section 4 gives a brief introduction to the construction of model atoms, together with a detailed review of the main differences between the two model atoms used in our study. It is also worth noting that one of the objectives of this work was to develop a computational package in IDL to make the output of the synthetic spectrum code compatible with an abundance calculation program developed by Sergio Simón-Díaz.
This work began with the selection of a preliminary list of 46 O ii absorption lines. The selection started from a line list provided by the KAS system, from which lines were discarded due to contamination, blending, difficulty in measuring their equivalent width or problems in the synthetic spectrum, until reaching the preliminary list of 46 O ii lines. The methodology used for the selection and for computing the contamination of the lines is detailed in section 5. This section also develops the curve-of-growth method and the effect that the different stellar parameters have on the resulting abundances.
Our analyses showed that the models used give results in agreement with those derived by García-Rojas et al. (2014). After verifying that the models can, globally, support a full abundance analysis, we proceeded to analyse the line set divided by multiplets. This approach allowed us to examine the sensitivity of each multiplet to changes in the model atom, as well as to uncover problems in the atomic data used to compute the synthetic spectra. The multiplet-by-multiplet study mainly demonstrated that: the degree of sensitivity to changes in the model atom depends on the multiplet; the strongest lines seem to show greater sensitivity than the weakest ones; there are deficiencies in some values of the atomic data considered; and the sensitivity to changes in the models appears to be smaller at higher microturbulence. Finally, our analyses allowed us to develop a methodology applicable to other stars with similar stellar parameters, to other chemical species and to a larger number of model atoms. Section 6 gives an in-depth review of the most important results of our analysis, while section 7 presents the final conclusions and the future courses of action to be taken, starting from the methodology developed.
2021-07-29T11:30:24Z
2021-07-29T11:30:24Z
2021
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/25010
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/96132021-11-05T09:02:25Zcom_915_668com_915_488col_915_678
Efecto Casimir
Martín Gutiérrez, Alejandro
Alonso Ramírez, Daniel
Física
2018-07-19T12:45:05Z
2018-07-19T12:45:05Z
2018
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/9613
en
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 internacional)
oai:riull.ull.es:915/157272021-11-05T09:02:27Zcom_915_668com_915_488col_915_678
Estudio del borrador cuántico con interferómetro
Pérez García, Ámbar
Sala Mayato, Rafael Francisco
borrador cuántico
interferómetro
Mach-Zehnder
This end-of-degree project arises from the motivation to carry out a dual study in which a theoretical development and its corresponding experimental verification complement each other. To this end, the analysis of a quantum eraser using a Mach-Zehnder interferometer has been proposed, with the aim of reproducing in the laboratory the behaviour predicted by quantum physics.
The strategy followed has consisted of a bibliographic review to select the most relevant documentation, followed by the experimental assembly and a series of qualitative tests of its operation. It should be borne in mind that throughout the work two theories are used simultaneously: quantum mechanics and classical electromagnetism.
This task is of great interest because it brings closer, in a simple and visual way, certain phenomena of quantum physics that are difficult to grasp because they are highly counter-intuitive, such as complementarity or non-locality. To this end, several quantum-erasure schemes using the Mach-Zehnder interferometer have been described within the formalism of quantum physics, their understanding aided by various descriptive graphs and diagrams. Photographs of the instrument used in the laboratory are also included.
From the analysis presented throughout the report, three fundamental aspects of quantum theory could be verified. First, wave-particle duality is present at all times, since the character of the particle is seen to depend on the experimenter's ability to access information about the path it has taken. When the paths are indistinguishable, the interference pattern resulting from the wave-like behaviour of the particles can be observed: the wavefunction splits in two, each part travels along one of the arms, and the two interfere at the output of the interferometer. If the paths are marked using polarizers, the particle can only go along one of them and behaves as a corpuscle.
Second, Bohr's complementarity principle has been confirmed. Complementary properties cannot be measured or observed simultaneously; in this case, the interference fringes and the path of the photon. Placing the polarizers generates an entangled state between the path followed by the photon and its polarization. Under these circumstances, knowing the polarization of the photon means knowing the path it has taken; its behaviour must therefore be corpuscular, and the interference pattern disappears.
Third, if access to the which-path information were lost, the wave-like character of the photons would instantly be recovered; this is a manifestation of the non-local character of quantum mechanics.
Finally, the main objective has been achieved: to assemble a Mach-Zehnder interferometer that meets the requirements to behave as a quantum eraser. An extension of the project remains pending: to use the quantum eraser with the Mach-Zehnder interferometer to interpret delayed-choice measurements.
This end-of-degree project arises from the motivation to carry out a dual study in which a
theoretical development and its corresponding experimental verification complement each
other. For this purpose, a quantum eraser has been analyzed using a Mach-Zehnder
interferometer, with the aim of reproducing in the laboratory the outcomes predicted by
quantum physics.
The strategy followed consisted of a bibliographic review to select the most relevant
documentation, followed by the experimental setup and a series of qualitative tests of its
operation. It should be borne in mind that throughout the work two theories are used
simultaneously: quantum mechanics and classical electromagnetism.
Carrying out this task is of great interest because it brings closer, in a simple
and visual way, certain phenomena of quantum physics that are hard to grasp
because they are deeply counterintuitive, such as complementarity and non-locality. To this end,
several quantum-erasure schemes based on the Mach-Zehnder interferometer have been
described within the formalism of quantum physics, their understanding aided by various
graphics and descriptive diagrams. Photographs of the instrument used in the laboratory
are also included, together with explanations of the tests carried out.
From the analysis presented throughout the report, three fundamental aspects of quantum
theory could be observed. First, wave-particle duality is present at all times, since the
character of a particle is shown to depend on the experimenter's ability to access the
information about the path it has taken. When the paths are indistinguishable, an
interference pattern is observed as a consequence of the wave-like behavior of the
particles: the wave function splits in two, each part travels one of the arms, and the two
parts interfere at the output of the interferometer. If the paths are marked using
polarizers, the particle can only go through one of them and behaves as a corpuscle.
Second, Bohr's principle of complementarity has been confirmed. Complementary
properties cannot be measured or observed simultaneously; in this case, the interference
fringes and the path of the photon. Placing the polarizers generates an entangled state
between the path followed by the photon and its polarization. Under this circumstance,
knowing the polarization of the photon amounts to knowing the path it has taken; its
behavior must therefore be corpuscular, and the interference pattern disappears.
Third, if access to the which-path information were lost, the wave-like character of the
photons would be instantly recovered; this is a manifestation of the non-local character
of quantum mechanics.
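The three observations above can be illustrated with a minimal Jones-calculus sketch (an illustration, not the code of this work): the field in each arm is a polarization 2-vector, orthogonal polarizers mark the paths, and a hypothetical 45-degree analyzer before the detector plays the role of the eraser. All splitters are assumed ideal 50/50.

```python
import numpy as np

H = np.array([1.0, 0.0])          # horizontal polarization
V = np.array([0.0, 1.0])          # vertical polarization
D = (H + V) / np.sqrt(2)          # diagonal (45 deg) polarization

def intensity(phase, mark_paths=False, erase=False):
    """Intensity at one output port of a Mach-Zehnder interferometer
    fed with diagonally polarized light, vs. the arm phase difference."""
    e1 = D / np.sqrt(2)                        # field in arm 1
    e2 = np.exp(1j * phase) * D / np.sqrt(2)   # field in arm 2 (extra phase)
    if mark_paths:                             # which-path markers:
        e1 = H * (H @ e1)                      #   H polarizer in arm 1
        e2 = V * (V @ e2)                      #   V polarizer in arm 2
    out = (e1 + e2) / np.sqrt(2)               # recombining 50/50 splitter
    if erase:                                  # 45 deg analyzer (the eraser)
        out = D * (D @ out)
    return float(np.vdot(out, out).real)
```

Without marking, `intensity` oscillates between 1 and 0 with the phase (full-visibility fringes); with `mark_paths=True` it is constant at 1/4 for any phase (the corpuscular case); adding `erase=True` restores phase-dependent fringes at reduced amplitude.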
Finally, the main objective has been achieved: to assemble a Mach-Zehnder interferometer
that meets the requirements to behave as a quantum eraser. The extension of the project,
using the Mach-Zehnder quantum eraser to interpret delayed-choice measurements, remains
as future work.
2019-07-26T10:40:04Z
2019-07-26T10:40:04Z
2019
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/15727
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/146422021-11-26T09:35:15Zcom_915_668com_915_488col_915_678
Modelos cosmológicos alternativos al modelo concordante de materia y energía oscuras
Vos Ginés, Bernhard
Cepa Nogue, Jorge
Dark matter
Dark energy
Throughout the history of physics, different conceptions of the Universe we are part of
have existed and coexisted. The extension of the human senses provided by modern
instrumentation has allowed us to form an increasingly realistic and humble vision of the
cosmos. The current "concordance model" corresponds to the ΛCDM cosmological model,
supported by General Relativity in the macroscopic world and by the standard model of
particle physics in the microscopic world. The unification problem between these two
great theories is not the subject of this final degree project.
This cosmological model, as its acronym indicates, describes a Universe composed of two
components in addition to baryonic matter: dark matter and dark energy. However, their
nature is unknown and their existence cannot be firmly established. In this context,
numerous reinterpretations arise of the observations that were used to postulate the dark
matter and dark energy hypotheses, with the hope of being validated experimentally in the
future. In this work, some of the most important models proposed as alternatives are
considered: the negative-mass model, MOND theories, 𝑓(𝑅) theories, the Chaplygin gas
model, and the entropic gravity model. Post-Newtonian parametrization and angular
redshift fluctuations are also mentioned as observational constraints on new models that
could help discard some theories and support others.
2019-06-26T11:35:14Z
2019-06-26T11:35:14Z
2019
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/14642
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/290842022-07-21T10:48:04Zcom_915_668com_915_488col_915_678
Potential Energy Surface of H3- using Atom-Bond pairwise additive scheme
García De Lamo, Elena
Bretón Peña, José Diego
Hernández Rojas, Javier
Grado En Física
PES
Atom-Bond
Ab initio
2022-07-19T09:57:26Z
2022-07-19T09:57:26Z
2022
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/29084
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/6102021-11-05T09:02:08Zcom_915_668com_915_488col_915_678
Measurements and analysis of optical properties of materials with technological interest
Labrador Páez, Lucía
Martín Benenzuela, Inocencio Rafael
Física
Espectroscopía de emisión
Óptica
2014-10-28T12:40:05Z
2014
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/610
es
en
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/57922021-11-05T09:02:10Zcom_915_668com_915_488col_915_678
Bioacústica Física: La parametrización de las ecuaciones para simular el canto de los pájaros.
Rodríguez Beltrán, Pablo
Rosa González, Fernando Luis
2017-07-21T13:15:29Z
2017-07-21T13:15:29Z
2017
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/5792
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 internacional)
oai:riull.ull.es:915/96232021-11-05T09:02:24Zcom_915_668com_915_488col_915_678
Estudio y caracterización del anticiclón de las Azores
Hernández León, Víctor
Expósito González, Francisco Javier
Física
2018-07-20T08:20:05Z
2018-07-20T08:20:05Z
2018
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/9623
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 internacional)
oai:riull.ull.es:915/162532021-11-05T09:01:49Zcom_915_668com_915_488col_915_678
Caracterización de la emisión de microondas en M31 usando nuevos datos del Sardinia Radio Telescope
Pérez Toledo, Fabricio Manuel
Génova Santos, Ricardo T.
Battistelli, Elia
In this study, we have performed a correlation analysis between the microwave maps and
both the far- and mid-infrared maps and the parameter maps of the Andromeda galaxy. To
this end, we have used intensity maps from the Sardinia Radio Telescope and public data
from the Herschel Space Observatory and the Spitzer Space Telescope. The parameter maps
were created by combining public data with models produced by DustEM. The report has four
sections: introduction, methodology, results, and conclusions. The first section
establishes the basic background. We begin by explaining what the interstellar medium
(ISM) is and the phenomena that take place within it, including the temperature,
composition, and relative abundance of the different dust grain types found in the
medium. Next, we present various emission mechanisms, focusing on the anomalous microwave
emission. We then gather and display information about the Andromeda galaxy, explaining
its suitability as our object of study and justifying why we chose it over other galaxies.
In the second section, we explain the basic characteristics of the radio telescope used
for the C-band and K-band observations. This is followed by the observation planning and
the measurement strategies for the calibration sources. We then describe the treatment
applied to the observations, the models obtained with DustEM, and the archive maps we
selected. In the last part of this section, we detail in depth the process of adjusting
the archive maps to our microwave maps. To close the section, we explain how the
correlations between maps were computed, as well as the procedure followed, using both
DustEM and the infrared maps, to obtain the parameter maps.
In the third section, which can be divided into two smaller blocks, the results of this
study are presented. In the first block, we determine the correlation between the
microwave and infrared maps, taking the adjusted maps as references. In most of the
correlations we obtain Pearson coefficients of around 0.6 for the ring region, 0.4 for
the disc, and 0.2 for the nucleus. However, for the nucleus the 24 and 100 µm maps give
values of around 0.7. In the second block, the correlations between the SRT map and both
the BG and ISRF maps show Pearson coefficients of around 0.7 in the ring and 0.5 in the
disc. For the nucleus, the value for the ISRF is 0.7 (the same as for the ring), but the
nucleus shows no correlation with the BG. For the VSG map we find a correlation of around
0.4 with the disc.
Finally, in the fourth and last section, we conclude that the correlations between the
SRT and parameter maps indicate that the microwave emission is mainly related to the
intensity of the radiation field, the abundance of BG, and the dust temperature. The
relation between the VSG abundance and the microwave emission is only observed in the
disc. These results are consistent with those obtained by Tibbs et al. (2012). On the
other hand, the results for the dust species are not conclusive, since they may have been
affected by various factors. We therefore conclude that improving the results would
require widening the range of values used for the dust species, using another fitting
technique, or using a more accurate and precise model.
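The map-to-map comparisons described above reduce to a Pearson coefficient computed over co-registered pixels. A minimal sketch (not the pipeline used in the work, and assuming both maps already share the same pixel grid) could look like:

```python
import numpy as np

def pearson_r(map_a, map_b):
    """Pearson correlation coefficient between two co-registered maps.

    Pixels that are NaN/blank in either map (e.g. masked regions) are
    excluded from both before correlating.
    """
    a = np.asarray(map_a, dtype=float).ravel()
    b = np.asarray(map_b, dtype=float).ravel()
    good = np.isfinite(a) & np.isfinite(b)
    a = a[good] - a[good].mean()
    b = b[good] - b[good].mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))
```

In practice the same coefficient would be evaluated separately over pixel masks for the ring, disc, and nucleus regions.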
2019-10-02T13:30:18Z
2019-10-02T13:30:18Z
2019
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/16253
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/291042022-11-17T12:39:42Zcom_915_668com_915_488col_915_678
Introducción a la espectropolarimetría solar
Bonilla Mariana, Iván
Ruiz Cobo, Basilio
Grado En Física
The first part of this report contains an introduction to the Sun. It reviews both the
structure of the Sun (according to the standard model it is formed by seven layers: the
core, the radiative zone, the convective zone, the photosphere, the chromosphere, the
solar corona, and the heliosphere) and the existence of magnetic structures such as solar
flares and coronal mass ejections. Next, the radiation field and the magnitudes that
define it are described, as well as the radiative transfer equation (RTE) and the main
approximations that are usually applied.
We then discuss the mechanisms of formation of spectral lines and the broadening
mechanisms responsible for their shape, which is a Voigt profile (or function) resulting
from three mechanisms: natural broadening (which can be ignored), Doppler broadening (the
most important), and collisional broadening. Additionally, we discuss the polarization of
light: the Stokes parameters are introduced and the radiative transfer equation for the
Stokes parameters is written.
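The Voigt function mentioned above is the convolution of the Doppler (Gaussian) core with the collisional/natural (Lorentzian) wings. A brute-force numerical sketch, purely illustrative (the grid step and span are arbitrary choices, not values from the work):

```python
import numpy as np

def voigt(v, a, dv=0.01, span=40.0):
    """Voigt function H(a, v): a Gaussian exp(-y^2) convolved with a
    Lorentzian of damping parameter a, normalised so that H(a, v)
    tends to the pure Gaussian exp(-v^2) as a -> 0."""
    n = int(round(2 * span / dv)) + 1          # odd-length symmetric grid
    grid = np.linspace(-span, span, n)         # reduced wavelength offsets
    gauss = np.exp(-grid**2)                   # Doppler core
    lorentz = (a / np.pi) / (grid**2 + a**2)   # damping wings
    h = np.convolve(gauss, lorentz, mode="same") * dv
    return np.interp(v, grid, h)
```

For accurate work one would instead evaluate the real part of the Faddeeva function, but the convolution makes the physical origin of the profile explicit.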
Later we examine the Zeeman effect and how it produces polarization. We then describe the
different approaches used to solve the RTE; the most important for this work is the
Milne-Eddington approximation, in which, to obtain an analytical solution of the RTE for
polarized light, the absorption matrix is taken to be constant with optical depth (which
implies that all magnitudes are constant with depth, such as the magnetic field vector,
the line-of-sight velocity, and the parameters that define the line, such as its
strength, Doppler width, and damping). Furthermore, it is necessary to assume that the
source function (the ratio of emission to absorption) is a linear function of optical
depth. With this approximation we obtain an analytical solution for the Stokes parameters
(the Unno-Rachkovsky equations) as a function of the parameters that define the line
(such as the wavelength or the quantum numbers of the levels involved in the bound-bound
transition) and of 9 free parameters: the two that define the source function; the three
that define the magnetic field; the one that defines the line-of-sight velocity; the one
that defines the strength of the line; and finally the two that define the broadening of
the line.
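The linear-source-function assumption above has a simple scalar (unpolarized) consequence, the Eddington-Barbier relation I(0, µ) = S0 + µ·S1, which a numerical quadrature of the formal solution of the RTE reproduces. This is an illustrative check, not part of the thesis program:

```python
import numpy as np

def emergent_intensity(S0, S1, mu, n=200001, tau_max=60.0):
    """Integrate the formal solution of the scalar RTE,
    I(0, mu) = (1/mu) * integral of S(tau) * exp(-tau/mu) dtau,
    for a source function linear in optical depth, S = S0 + S1*tau."""
    tau = np.linspace(0.0, tau_max, n)
    f = (S0 + S1 * tau) * np.exp(-tau / mu) / mu
    dtau = tau[1] - tau[0]
    return float(np.sum((f[:-1] + f[1:]) * 0.5) * dtau)  # trapezoid rule
```

The quadrature agrees with S0 + µ·S1 to high accuracy for any line-of-sight inclination µ.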
In this work, a Python program has been designed and written that allows us to calculate
the profiles of the Stokes parameters, once the parameters that define the spectral line
are known, for any set of values of the 9 free parameters. It has been verified that the
program has no errors, and the behavior of the spectral lines has then been studied by
varying each of these 9 parameters independently.
In the final part of the report, data observed by the Hinode satellite at a spot near the
center of the Sun have been read and represented. The data consist of a cube of 512x512
pixels observed in the 4 Stokes parameters along 112 wavelengths that include two
spectral lines of Fe I around 6300 Å.
Finally, some of these pixels have been chosen and their Stokes spectra have been
compared with those synthesized with our program.
2022-07-19T10:31:24Z
2022-07-19T10:31:24Z
2022
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/29104
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 Internacional)
oai:riull.ull.es:915/106272021-11-05T09:02:25Zcom_915_668com_915_488col_915_678
Estudio detallado de algunos efectos potencialmente relevantes para la determinación de espectros de materia oscura
Pérez Pérez, Bárbara
Betancort Rijo, Juan Eugenio
Física
2018-10-10T10:10:05Z
2018-10-10T10:10:05Z
2018
info:eu-repo/semantics/bachelorThesis
http://riull.ull.es/xmlui/handle/915/10627
es
https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es_ES
Licencia Creative Commons (Reconocimiento-No comercial-Sin obras derivadas 4.0 internacional)