Subscription full text: 1,857 articles
Free full text: 88 articles
Discipline: Social Sciences, 1,945 articles

Articles by publication year:
2024: 5; 2023: 34; 2022: 16; 2021: 28; 2020: 59; 2019: 94; 2018: 99; 2017: 75; 2016: 115; 2015: 67; 2014: 95; 2013: 252; 2012: 88; 2011: 87; 2010: 53; 2009: 65; 2008: 68; 2007: 66; 2006: 43; 2005: 58; 2004: 55; 2003: 44; 2002: 46; 2001: 46; 2000: 18; 1999: 28; 1998: 25; 1997: 19; 1996: 19; 1995: 17; 1994: 16; 1993: 12; 1992: 13; 1991: 13; 1990: 13; 1989: 9; 1988: 5; 1987: 6; 1986: 11; 1985: 11; 1984: 4; 1983: 4; 1982: 5; 1981: 4; 1980: 4; 1979: 8; 1977: 3; 1976: 4; 1975: 3; 1974: 3

Sort order: 1,945 results found; search took 265 ms.
1.
Damage models for natural hazards are used for decision making on reducing and transferring risk. The damage estimates from these models depend on many variables and their complex, sometimes nonlinear, relationships with the damage. In recent years, data-driven modeling techniques have been used to capture those relationships. The data available to build such models are often limited, so in practice it is usually necessary to transfer models to a different context. In this article, we show that this implies the samples used to build the model are often not fully representative of the situation in which they are applied, which leads to a "sample selection bias." We enhance data-driven damage models by applying methods, not previously applied to damage modeling, that correct for this bias before the machine learning (ML) models are trained. We demonstrate this with case studies on flooding in Europe and typhoon wind damage in the Philippines. Two sample selection bias correction methods from the ML literature are applied, and one of these methods is also adjusted to our problem. These three methods are combined with stochastic generation of synthetic damage data. For both case studies, the sample selection bias correction techniques reduce model errors; for the mean bias error in particular, the reduction can exceed 30%. The novel combination with stochastic data generation seems to enhance these techniques. This shows that sample selection bias correction methods are beneficial for damage model transfer.
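The abstract does not spell out the two correction methods, so the sketch below illustrates one common approach from the ML literature: reweighting the training (source) samples by an estimated density ratio toward the target feature distribution, obtained from a probabilistic domain classifier, before the damage model is trained. The scikit-learn calls are standard; the feature matrices and the synthetic "damage" values are hypothetical placeholders, not the paper's data or exact procedure.

```python
# Minimal sketch of importance weighting to correct sample selection bias,
# assuming scikit-learn; not the authors' exact procedure.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)
# Hypothetical stand-ins: source = region where damage data were collected,
# target = region where the transferred model will be applied.
X_source = rng.normal(loc=0.0, scale=1.0, size=(500, 3))            # e.g. hazard intensity, duration, building age
y_source = X_source[:, 0] ** 2 + rng.normal(scale=0.1, size=500)    # synthetic "damage"
X_target = rng.normal(loc=0.8, scale=1.2, size=(300, 3))            # shifted feature distribution

# 1. Train a domain classifier: label 0 = source, 1 = target.
X_dom = np.vstack([X_source, X_target])
d_dom = np.concatenate([np.zeros(len(X_source)), np.ones(len(X_target))])
clf = LogisticRegression(max_iter=1000).fit(X_dom, d_dom)

# 2. Density-ratio weights for the source samples: w(x) ∝ P(target|x) / P(source|x).
p_target = clf.predict_proba(X_source)[:, 1]
weights = p_target / (1.0 - p_target)
weights *= len(weights) / weights.sum()   # normalise to mean 1 for numerical stability

# 3. Fit the damage model on the reweighted source data.
model = GradientBoostingRegressor().fit(X_source, y_source, sample_weight=weights)
print("Predicted damage on target features:", model.predict(X_target[:5]))
```

Other weighting schemes (e.g. kernel mean matching) could be substituted in step 2; in each case the weights reach the learner through `sample_weight`, so the fitted model is tuned toward the application region rather than the sampled one.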
2.
In recent years, the Dutch healthcare sector has been confronted with increased competition. Not only are financial resources scarce, but Dutch hospitals also need to compete with other hospitals in the same geographic area to attract and retain talented employees due to considerable labour shortages. However, four hospitals operating in the same region are cooperating to cope with these shortages by developing a joint Talent Management Pool. 'Coopetition' is the term used for simultaneous cooperation and competition. In this paper, a case study is performed in order to enhance our understanding of coopetition. Among other things, the findings suggest that organizational actors' perceptions of competition differ and might hinder cooperative innovation with competitors, while perceived shared problems and resource constraints stimulate coopetition. We reflect on the current coopetition literature in light of the research findings, which have implications for future research on this topic.
3.
In studies with recurrent event endpoints, misspecified assumptions about event rates or dispersion can lead to underpowered trials or overexposure of patients. Specification of overdispersion is often a particular problem, as it is usually not reported in clinical trial publications. Changing event rates over the years have been described for some diseases, adding to the uncertainty in planning. To mitigate the risks of inadequate sample sizes, internal pilot study designs have been proposed, with a preference for blinded sample size reestimation procedures, as they generally do not affect the type I error rate and maintain trial integrity. Blinded sample size reestimation procedures are available for trials with recurrent events as endpoints. However, the variance in the reestimated sample size can be considerable, particularly with early sample size reviews. Motivated by a randomized controlled trial in paediatric multiple sclerosis, a rare neurological condition in children, we apply the concept of blinded continuous monitoring of information, which is known to reduce the variance in the resulting sample size. Assuming negative binomial distributions for the counts of recurrent relapses, we derive information criteria and propose blinded continuous monitoring procedures. Their operating characteristics are assessed in Monte Carlo trial simulations, demonstrating favourable properties with regard to type I error rate, power, and stopping time, i.e., sample size.
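As a rough illustration of the monitoring idea, the sketch below computes a blinded estimate of the current Fisher information for the log rate ratio from pooled relapse counts and compares it with the information required for the planned power. It assumes equal follow-up for all patients, 1:1 allocation, a negative binomial variance of the form mu + k*mu^2, and method-of-moments nuisance estimates from the blinded data; the function names and the specific criterion are illustrative reconstructions, not the authors' derivation.

```python
# Rough sketch of blinded continuous information monitoring for recurrent
# events (negative binomial counts); simplifying assumptions as noted above.
import numpy as np
from scipy.stats import norm

def required_information(log_rate_ratio, alpha=0.05, power=0.8):
    """Target Fisher information for the log rate ratio (two-sided test)."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return (z_a + z_b) ** 2 / log_rate_ratio ** 2

def blinded_information(counts, follow_up):
    """Approximate current information from blinded (pooled) data.

    counts:    observed relapse counts per patient (treatment labels unknown)
    follow_up: common follow-up time
    """
    m = counts.mean()
    v = counts.var(ddof=1)
    rate = m / follow_up
    k = max((v - m) / m ** 2, 0.0)        # method-of-moments overdispersion
    mu = rate * follow_up
    # Per-patient GLM weight for a log-link negative binomial model, pooled over arms;
    # with 1:1 allocation the information for the treatment contrast is ~1/4 of the total.
    total_weight = len(counts) * mu / (1.0 + k * mu)
    return total_weight / 4.0

# Hypothetical interim data: 120 patients, 2 years of follow-up each.
rng = np.random.default_rng(1)
interim_counts = rng.negative_binomial(n=2, p=2 / (2 + 1.2), size=120)  # mean ~1.2 relapses

target = required_information(log_rate_ratio=np.log(0.7))   # planned effect: 30% rate reduction
current = blinded_information(interim_counts, follow_up=2.0)
print(f"information {current:.1f} / {target:.1f} -> "
      f"{'stop recruitment' if current >= target else 'continue'}")
```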
4.
5.
The body is the empirical quintessence of the self. Because selfhood is symbolic, embodiment represents the personification and materialization of otherwise invisible qualities of personhood. The body and experiences of embodiment are central to our sense of being, who we think we are, and what others attribute to us. What happens, then, when one's body is humiliating? How does the self handle the implications of a gruesome body? How do people manage selfhood in light of grotesque physical appearances? This study explores these questions in the experiences of dying cancer patients and seeks to better understand relationships among body, self, and situated social interaction.
6.
7.
Two habituation experiments were conducted to investigate how 4-month-old infants perceive partly occluded shapes. In the first experiment, we presented a simple, partly occluded shape to the infants until habituation was reached. Then we showed either a probable completion (one that would be predicted on the basis of both local and global cues) or an improbable completion. Longer looking times were found for the improbably completed shape (compared to probable and control conditions), suggesting that the probable shape was perceived during partial occlusion. In the second experiment, infants were habituated to more ambiguous partly occluded shapes, where local and global cues would result in different completions. For adults, the percept of these shapes is usually dominated by global influences. However, after habituation the infants looked longer at the globally completed shapes. These results suggest that by the age of 4 months, infants are able to infer the perceptual completion of partly occluded shapes, but for more ambiguous shapes, this completion seems to be dominated by local influences.
8.
Social data often contain missing information. The problem is inevitably severe when analysing historical data. Conventionally, researchers analyse complete records only. Listwise deletion not only reduces the effective sample size but may also result in biased estimation, depending on the missingness mechanism. We analyse household types using population registers from ancient China (618–907 AD), comparing a simple classification, a latent class model of the complete data, and a latent class model of the complete and partially missing data assuming four types of ignorable and non-ignorable missingness mechanisms. The findings show that either a frequency classification or a latent class analysis using the complete records only yielded biased estimates and incorrect conclusions in the presence of partially missing data with a non-ignorable mechanism. Although simply assuming ignorable or non-ignorable missing data produced consistently similar, higher estimates of the proportion of complex households, specifying the relationship between the latent variable and the degree of missingness through a row-effect uniform association model helped to capture the missingness mechanism better and improved the model fit.
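As a much simpler illustration than the models fitted in the paper, the sketch below runs an EM algorithm for a two-class latent class model on binary indicators in which missing entries are simply skipped in the likelihood, i.e. treated as ignorable (MAR). The non-ignorable row-effect specification described in the abstract would additionally require modelling the missingness indicators jointly with the latent variable. The data and class count here are hypothetical.

```python
# Minimal EM for a latent class model with ignorably missing binary items
# (NaN = missing); an illustrative sketch, not the paper's model.
import numpy as np

def lca_em(X, n_classes=2, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    n, J = X.shape
    obs = ~np.isnan(X)                                     # missing-data indicator
    pi = np.full(n_classes, 1.0 / n_classes)               # class proportions
    theta = rng.uniform(0.25, 0.75, size=(n_classes, J))   # P(item j = 1 | class c)
    X_filled = np.nan_to_num(X, nan=0.0)

    for _ in range(n_iter):
        # E-step: posterior class membership, using observed items only (MAR).
        log_r = np.log(pi)[None, :].repeat(n, axis=0)
        for c in range(n_classes):
            p = np.where(X_filled == 1, theta[c], 1 - theta[c])
            log_r[:, c] += np.where(obs, np.log(p), 0.0).sum(axis=1)
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)

        # M-step: weighted proportions, again restricted to observed cells.
        pi = r.mean(axis=0)
        for c in range(n_classes):
            w = r[:, c][:, None] * obs
            theta[c] = (w * X_filled).sum(axis=0) / w.sum(axis=0)
        theta = np.clip(theta, 1e-6, 1 - 1e-6)   # guard against log(0)
    return pi, theta, r

# Hypothetical household indicators (1 = feature present, NaN = not recorded).
X = np.array([[1, 1, 1], [1, np.nan, 1], [0, 0, np.nan], [0, 0, 0],
              [1, 1, np.nan], [0, np.nan, 0]], dtype=float)
pi, theta, r = lca_em(X)
print("class proportions:", pi.round(2))
```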
9.
10.
We study the properties of the quasi-maximum likelihood estimator (QMLE) and related test statistics in dynamic models that jointly parameterize conditional means and conditional covariances, when a normal log-likelihood is maximized but the assumption of normality is violated. Because the score of the normal log-likelihood has the martingale difference property when the first two conditional moments are correctly specified, the QMLE is generally consistent and has a limiting normal distribution. We provide easily computable formulas for asymptotic standard errors that are valid under nonnormality. Further, we show how robust LM tests for the adequacy of the jointly parameterized mean and variance can be computed from simple auxiliary regressions. An appealing feature of these robust inference procedures is that only first derivatives of the conditional mean and variance functions are needed. A Monte Carlo study indicates that the asymptotic results carry over to finite samples. Estimation of several AR and AR-GARCH time series models reveals that in most situations the robust test statistics compare favorably to the two standard (nonrobust) formulations of the Wald and LM tests. Also, for the GARCH models and the sample sizes analyzed here, the bias in the QMLE appears to be relatively small. An empirical application to stock return volatility illustrates the potential importance of computing robust statistics in practice.
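As a practical illustration of such robust inference, the snippet below fits an AR(1)-GARCH(1,1) model by Gaussian QMLE to simulated heavy-tailed returns and requests a sandwich-type ("robust") covariance estimator. It assumes the Python `arch` package; the simulated series merely stands in for real stock-return data, and the package's robust covariance option is used in place of the paper's own formulas.

```python
# Gaussian QMLE of an AR(1)-GARCH(1,1) with robust (sandwich-type) standard
# errors; a sketch assuming the `arch` package, with simulated data.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(0)
# Heavy-tailed simulated "returns": normality is deliberately violated,
# which is exactly the situation where QMLE-robust standard errors matter.
returns = rng.standard_t(df=5, size=2000)

am = arch_model(returns, mean="AR", lags=1, vol="GARCH", p=1, q=1, dist="normal")
# cov_type="robust" yields standard errors that remain valid under nonnormality,
# provided the first two conditional moments are correctly specified.
res = am.fit(cov_type="robust", disp="off")
print(res.summary())
```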